CN108513088A - Method and device for group video session - Google Patents

Method and device for group video session

Info

Publication number
CN108513088A
Authority
CN
China
Prior art keywords
user
group
video
virtual
video session
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710104439.2A
Other languages
Chinese (zh)
Other versions
CN108513088B (en)
Inventor
Li Kai (李凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710104439.2A priority Critical patent/CN108513088B/en
Priority to PCT/CN2018/075749 priority patent/WO2018153267A1/en
Priority to TW107106428A priority patent/TWI650675B/en
Publication of CN108513088A publication Critical patent/CN108513088A/en
Priority to US16/435,733 priority patent/US10609334B2/en
Application granted granted Critical
Publication of CN108513088B publication Critical patent/CN108513088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and device for a group video session, belonging to the field of VR (Virtual Reality). The method includes: creating a group video session; for each user in the group video session, determining the user type of the user according to the device information of the user, the user type including ordinary user and virtual user, an ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session and a virtual user indicating that the user adopts a virtual-reality display mode when participating in the group video session; processing the video data of the group video session according to the video display mode indicated by the user type of the user to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user; and, while the group video session is in progress, sending the target video data to the user equipment of the user. The present invention provides strong flexibility in group video sessions.

Description

Method and device for group video session
Technical field
The present invention relates to the technical field of VR (Virtual Reality), and more particularly to a method and device for a group video session.
Background technology
VR technologies are a kind of technologies that can be created with the experiencing virtual world, the environment true to nature that can be simulated and intelligently Perceive the behavior of user so that user feels on the spot in person.Therefore, application of the VR technologies in terms of social activity receives extensive pass Note, the method for carrying out group's video session based on VR technologies are come into being.
Currently, in group's video session, server can be that multiple Virtual User using VR equipment are created that virtually The virtual portrait that Virtual User selects is superimposed by environment with virtual environment, to express image of the Virtual User in virtual environment, In turn, the video that the audio of Virtual User is superimposed with image can be sent to Virtual User by server, be brought for Virtual User Vision and audio experience make Virtual User seemingly be talked animatedly with other Virtual User in the virtual world.
In the implementation of the present invention, the inventor finds that the existing technology has at least the following problems:
Virtual User can only carry out group's video session between Virtual User, many in VR equipment not yet universal today There is greatly communication disorders between the ordinary user and Virtual User of more unused VR equipment, when leading to group's video session Restricted strong, flexibility is poor.
Summary of the invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and device for a group video session. The technical solution is as follows:
In a first aspect, a method for a group video session is provided, the method including:
creating a group video session;
for each user in the group video session, determining the user type of the user according to the device information of the user, the user type including ordinary user and virtual user, the ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session, and the virtual user indicating that the user adopts a virtual-reality display mode when participating in the group video session;
processing the video data of the group video session according to the video display mode indicated by the user type of the user to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user;
during the group video session, sending the target video data to the user equipment of the user, so that the user carries out the group video session.
In a second aspect, a method for a group video session is provided, the method including:
receiving target video data of the group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of a terminal user, the user type of the terminal user being ordinary user, the ordinary user indicating that the terminal user adopts a two-dimensional display mode when participating in the group video session;
displaying the target video data, so that the ordinary users in the group video session are displayed in the form of two-dimensional characters, and the virtual users in the group video session are displayed in the form of two-dimensional virtual characters.
In a third aspect, a method for a group video session is provided, the method including:
receiving target video data of the group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of a VR device user, the user type of the VR device user being virtual user, the virtual user indicating that the VR device user adopts a virtual-reality display mode when participating in the group video session;
displaying the target video data, so that the ordinary users in the group video session are displayed in the virtual environment in the form of two-dimensional or three-dimensional characters, and the virtual users in the group video session are displayed in the virtual environment in the form of three-dimensional virtual characters.
In a fourth aspect, a device for a group video session is provided, the device including:
a creation module, configured to create a group video session;
a determining module, configured to, for each user in the group video session, determine the user type of the user according to the device information of the user, the user type including ordinary user and virtual user, the ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session, and the virtual user indicating that the user adopts a virtual-reality display mode when participating in the group video session;
a processing module, configured to process the video data of the group video session according to the video display mode indicated by the user type of the user to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user;
a sending module, configured to send the target video data to the user equipment of the user during the group video session, so that the user carries out the group video session.
In a fifth aspect, a device for a group video session is provided, the device including:
a receiving module, configured to receive target video data of the group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of a terminal user, the user type of the terminal user being ordinary user, the ordinary user indicating that the terminal user adopts a two-dimensional display mode when participating in the group video session;
a display module, configured to display the target video data, so that the ordinary users in the group video session are displayed in the form of two-dimensional characters, and the virtual users in the group video session are displayed in the form of two-dimensional virtual characters.
In a sixth aspect, a device for a group video session is provided, the device including:
a receiving module, configured to receive target video data of the group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of a VR device user, the user type of the VR device user being virtual user, the virtual user indicating that the VR device user adopts a virtual-reality display mode when participating in the group video session;
a display module, configured to display the target video data, so that the ordinary users in the group video session are displayed in the virtual environment in the form of two-dimensional or three-dimensional characters, and the virtual users in the group video session are displayed in the virtual environment in the form of three-dimensional virtual characters.
In the embodiments of the present invention, the user type of each user in the group video session is determined, and the video data of the group video session is processed according to the user type. When the user type is virtual user, target video data matching the virtual-reality display mode indicated by the virtual user can be obtained; when the user type is ordinary user, target video data matching the two-dimensional display mode indicated by the ordinary user can be obtained. A reasonable display mode is thus used to display the video data for each type of user, so that a group video session can be carried out among different types of users without restriction, improving the flexibility of group video sessions.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a group video session according to an embodiment of the present invention;
Fig. 2 is a method flow diagram of a group video session according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a user display position according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a group video session scene according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a display scene according to an embodiment of the present invention;
Fig. 6 is a flow chart of a virtual user carrying out a group video session according to an embodiment of the present invention;
Fig. 7 is a device block diagram of a group video session according to an embodiment of the present invention;
Fig. 8 is a device block diagram of a group video session according to an embodiment of the present invention;
Fig. 9 is a device block diagram of a group video session according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
Fig. 11 is a block diagram of a device 1100 for a group video session according to an embodiment of the present invention.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of a group video session according to an embodiment of the present invention. Referring to Fig. 1, the implementation environment includes:
at least one terminal 101 (e.g., a mobile terminal or a tablet computer), at least one VR device 102, and at least one server 103. The interaction process among the terminal 101, the VR device 102, and the server 103 may correspond to the group video session process in the following embodiments. The server 103 is configured to create a group video session for different types of users, receive and process the video data sent by the terminal 101 and the VR device 102, and send the processed video data to the terminal 101 or the VR device 102, so that different types of users can carry out a group video session with one another. The terminal 101 is configured to send the video data captured by its camera to the server 103 in real time, and to receive and display the video data processed by the server 103. The VR device 102 is configured to send the user's behavioral feature data collected by sensing devices to the server 103, and to receive and display the video data processed by the server 103.
Fig. 2 is a method flow diagram of a group video session according to an embodiment of the present invention. Referring to Fig. 2, the method is applied to the interaction process among the server, the terminal, and the VR device.
201. The server creates a group video session.
A group video session refers to a video session carried out by multiple (two or more) users based on the server. The multiple users may be users on the social platform corresponding to the server, and may have a group relationship or a friend relationship with one another.
In this step, when the server receives a group video session request from any user equipment, it can create the group video session. The embodiment of the present invention does not limit how the group video session request is initiated. For example, a user may initiate the request to all users in a group the user has established; in this example, the group video session request may carry the group identifier of that group, so that the server can obtain the user identifier of each user in the group according to the group identifier. As another example, the user may initiate the request after selecting some users from an established group or from the user's relationship chain; in this example, the group video session request may carry the user identifiers of the user and the selected users. After the server obtains the user identifiers, it can add the users corresponding to the user identifiers to the group video session, thereby creating the group video session.
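As an illustration only, a minimal server-side sketch of this creation logic is given below; the request fields, the storage layer, and helper names such as get_group_members are assumptions made for the example, not details from the patent.

```python
# Hypothetical sketch of step 201: creating a group video session from a request.
# Field names (group_id, selected_user_ids) and the membership lookup are assumed.

def create_group_video_session(request, directory, sessions):
    if request.get("group_id") is not None:
        # Request carries a group identifier: resolve every member of that group.
        user_ids = directory.get_group_members(request["group_id"])
    else:
        # Request carries the initiator plus the users the initiator selected.
        user_ids = [request["initiator_id"], *request["selected_user_ids"]]

    session = {"session_id": sessions.new_id(), "members": set(user_ids)}
    sessions.save(session)  # the session now exists; its members can join
    return session
```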
202. For each user in the group video session, the server determines the user type of the user according to the device information of the user.
The device information may be the device model of the user equipment with which the user logs in to the server. The device model may take a form such as phone brand + phone model, so that the server can determine the device type of the user equipment according to the correspondence between device models and device types. The device type may be a PC (Personal Computer) terminal, a mobile terminal, or a VR device.
In this step, the server may obtain the device information in several ways. For example, when the user equipment sends a login request to the server, the login request may carry the user identifier and the device information, so that the server can extract the user identifier and the device information when receiving the login request and store them correspondingly. Alternatively, the server may send a device-information acquisition request to the user equipment, so that the user equipment sends the device information to the server.
Since the users in the group video session may log in to the server with different user equipment, and different user equipment supports different video display modes (a VR device supports the virtual-reality display mode, while a terminal supports the two-dimensional display mode), the server needs to process video data in different ways for users using different user equipment, so as to obtain video data matching the video display mode supported by the user equipment. To determine how to process video data for a given user, the server first needs to determine the user type of that user. The user type includes ordinary user and virtual user. An ordinary user indicates that the user adopts the two-dimensional display mode when participating in the group video session; if the user is an ordinary user, the user logs in to the server with a non-VR device such as a mobile terminal or a tablet computer. A virtual user indicates that the user adopts the virtual-reality display mode when participating in the group video session; if the user is a virtual user, the user logs in to the server with a VR device.
In this step, the server can query the user type corresponding to the device information of the user according to a preconfigured correspondence among device information, device type, and user type. An example of the correspondence is shown in Table 1:
Table 1

Device information | Device type     | User type
XX thinkpad        | PC terminal     | Ordinary user
WW N7              | Mobile terminal | Ordinary user
UU VR              | VR device       | Virtual user
In fact, the device information may also be set by the user. For example, a device-information settings page may be provided on the VR device; a VR device user may set the current device information to "WW N7", or may keep the default "UU VR", so that the server obtains the device information set by the VR device user and thereby determines the user type the VR device user prefers to experience.
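A minimal sketch of the lookup in step 202 is shown below; the dictionary contents mirror Table 1, while the data structures, default fallback, and function name are illustrative assumptions rather than part of the patent.

```python
# Hypothetical sketch of step 202: looking up a user's type from device information.

DEVICE_TYPE_BY_MODEL = {
    "XX thinkpad": "pc_terminal",
    "WW N7": "mobile_terminal",
    "UU VR": "vr_device",
}

USER_TYPE_BY_DEVICE_TYPE = {
    "pc_terminal": "ordinary_user",
    "mobile_terminal": "ordinary_user",
    "vr_device": "virtual_user",
}

def determine_user_type(device_info: str) -> str:
    # Unknown models fall back to a mobile terminal here; this default is an assumption.
    device_type = DEVICE_TYPE_BY_MODEL.get(device_info, "mobile_terminal")
    return USER_TYPE_BY_DEVICE_TYPE[device_type]
```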
203. According to the video display mode indicated by the user type of the user, the server processes the video data of the group video session to obtain the target video data of the user.
The video display mode of the target video data matches the video display mode indicated by the user type of the user. In this step, if the user type of the user is ordinary user, the server determines that the user adopts the two-dimensional display mode when participating in this group video session, and uses the video data processing manner corresponding to the two-dimensional display mode for the user; if the user type of the user is virtual user, the server determines that the user adopts the virtual-reality display mode when participating in this video session, and uses the video data processing manner corresponding to the virtual-reality display mode for the user. The embodiment of the present invention does not limit the specific processing process. The video data processing manner corresponding to each type of user is introduced below.
When the user type is ordinary user, the processing process is as in the following steps 203A-203C:
203A. If the user type of the user is ordinary user, the server converts the three-dimensional virtual characters corresponding to the virtual users in the group video session into two-dimensional virtual characters.
A three-dimensional virtual character expresses the figure of a virtual user with three-dimensional image data, so that the user can be displayed as a three-dimensional virtual character during the group video session. In this step, the server can obtain the three-dimensional virtual character in several ways. For example, before the virtual user confirms entering the group video session, multiple three-dimensional virtual characters are provided for the virtual user, and the three-dimensional virtual character selected by the virtual user is used as the three-dimensional virtual character corresponding to that virtual user. As another example, the server obtains the user attributes of the virtual user and uses the three-dimensional virtual character matching those attributes as the three-dimensional virtual character corresponding to that virtual user; in this example, the user attributes include information such as age, gender, and occupation. Taking a virtual user whose attributes are a 30-year-old female teacher as an example, the server may select a three-dimensional virtual character with the image of a female teacher as the three-dimensional virtual character corresponding to that virtual user.
Further, the server can convert the obtained three-dimensional virtual character into a two-dimensional virtual character. It should be noted that the two-dimensional virtual character may be static or dynamic, which is not limited in this embodiment of the present invention. For example, to save the computing resources of the server, two-dimensional image data at a certain viewing angle may be extracted directly from the three-dimensional image data corresponding to the three-dimensional virtual character, and the two-dimensional image data at that viewing angle is used as the two-dimensional virtual character; to express the virtual user as fully as possible, the viewing angle may be the front view. As another example, to show the behavior of the virtual user vividly, the server can obtain the three-dimensional virtual character and the behavioral feature data of the virtual user collected by the VR device, the behavioral feature data including the expression feature data or limb feature data of the virtual user; the server can then determine the behavioral features of the three-dimensional virtual character according to the behavioral feature data, generate a three-dimensional virtual character conforming to those behavioral features so that its behavior is synchronized with the behavior of the virtual user, and then convert the three-dimensional virtual character into a two-dimensional virtual character.
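A minimal sketch of this step is given below under stated assumptions: the avatar object, its apply_features and render_view helpers, and the feature keys are placeholders invented for illustration and are not named in the patent.

```python
# Hypothetical sketch of step 203A: animate the 3D avatar with VR-collected behavior
# features, then project a single-view (front) 2D character from it.

def make_2d_virtual_character(avatar_3d, behavior_features=None, view="front"):
    if behavior_features is not None:
        # Drive the avatar's expression / limb pose so it stays in sync with the user.
        avatar_3d = avatar_3d.apply_features(
            expression=behavior_features.get("expression"),
            limbs=behavior_features.get("limbs"),
        )
    # Extract the 2D image data at a single viewing angle (front view by default).
    return avatar_3d.render_view(view)
```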
203B. The server synthesizes the two-dimensional virtual character, the two-dimensional background selected by the virtual user, and the audio data corresponding to the virtual user, to obtain first two-dimensional video data.
Based on the two-dimensional virtual character obtained in step 203A, in order to provide the user with a richer visual effect, the server may also add a two-dimensional background for the two-dimensional virtual character. The two-dimensional background is the background of the two-dimensional virtual character, such as a two-dimensional conference background or a two-dimensional beach background. The server may provide multiple two-dimensional backgrounds for the virtual user before the virtual user enters the group video session, or obtain the two-dimensional background selected by the virtual user. In fact, the server may also obtain the two-dimensional background in other ways, for example, by randomly obtaining the two-dimensional background corresponding to the virtual user. As another example, in order to give the users in the group video session the same experience as far as possible, the server may use two-dimensional image data mapped from the virtual environment corresponding to the group video session as the two-dimensional background; alternatively, the server may obtain the label of the virtual environment and use two-dimensional image data with the same label as the two-dimensional background. For example, if the label of the virtual environment is "forest", the server may use two-dimensional image data labeled "forest" as the two-dimensional background. Of course, the two-dimensional background may be static or dynamic.
In this step, the server can determine the display position and synthesis size of the two-dimensional virtual character on the two-dimensional background, adjust the original display size of the two-dimensional virtual character to obtain a two-dimensional virtual character conforming to the synthesis size, and synthesize the two-dimensional virtual character at the corresponding display position on the two-dimensional background, with the layer of the two-dimensional virtual character above the layer of the two-dimensional background, thereby obtaining the image data currently corresponding to the virtual user. In fact, the server may also determine, on the two-dimensional background, a display region corresponding to the display position and the synthesis size, remove the pixels in that display region, and embed the image data corresponding to the two-dimensional virtual character into the display region, so as to use the embedded two-dimensional image data as the image data currently corresponding to the virtual user.
During the group video session, when any user speaks, the user equipment sends the recorded audio data to the server in real time. Therefore, when the server receives the audio data corresponding to the virtual user, it can synthesize the current image data with the audio data to obtain the first two-dimensional video data, which expresses the virtual user's current words and actions. Of course, if the server has not currently received audio data corresponding to the virtual user, the current image data may be used directly as the first two-dimensional video data.
203C. The server synthesizes at least one piece of first two-dimensional video data and at least one piece of second two-dimensional video data, to obtain the target video data of the user.
The second two-dimensional video data refers to the two-dimensional video data of the ordinary users in the group video session. In this step, the server determines the display position and synthesis size of each user's current two-dimensional video data in the group video session, synthesizes each user's current video data with the virtual environment into one piece of two-dimensional video data according to the determined display positions and synthesis sizes, with the layers of the users' two-dimensional video data above the layer of the virtual environment, and uses the synthesized two-dimensional video data as the target video data of the user.
It should be noted that the two synthesis processes of steps 203B and 203C may correspond to a single synthesis process, in which the server omits the step of synthesizing the first two-dimensional video data and directly synthesizes the two-dimensional virtual character, the two-dimensional background, the audio data corresponding to the virtual user, and the second two-dimensional video data, thereby obtaining the target video data.
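As an illustration of the layering described in steps 203B and 203C, a minimal sketch using Pillow follows; the frame sizes, per-user layout, and helper signature are assumptions made for the example, not details taken from the patent.

```python
# Hypothetical sketch of steps 203B/203C: layer 2D virtual characters and ordinary
# users' 2D video frames over a background to build one target frame for an ordinary user.
from PIL import Image

def compose_target_frame(background: Image.Image, participants) -> Image.Image:
    """participants: iterable of (frame, display_position, synthesis_size) tuples,
    where frame is a 2D character image or an ordinary user's video frame."""
    canvas = background.copy()                     # background layer stays at the bottom
    for frame, position, size in participants:
        resized = frame.resize(size)               # adjust to the synthesis size
        mask = resized if resized.mode == "RGBA" else None
        canvas.paste(resized, position, mask)      # user layers sit above the background
    return canvas                                  # one frame of the target video data
```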
When the user type is virtual user, the processing process is as in the following steps 203D-203H:
203D. If the user type of the user is virtual user, the server determines the virtual environment corresponding to the group video session.
The virtual environment is the three-dimensional background of the virtual users in the group video session, such as a round-table conference virtual environment, a beach virtual environment, or a board-game virtual environment. The embodiment of the present invention does not limit the specific way of determining the virtual environment. For example, the server may adopt the following three determination manners:
In the first determination manner, the server determines the virtual environment corresponding to the virtual environment option triggered by the user as the virtual environment corresponding to that user in the group video session.
To make the process of providing virtual environments more user-friendly, the server can provide a variety of virtual environments and let the user freely choose the virtual environment for the group video session. In this determination manner, the server can provide at least one virtual environment option and the corresponding virtual environment thumbnail on the VR device (or on a terminal bound to the VR device), each virtual environment option corresponding to one virtual environment. When the VR device detects the virtual user's trigger operation on a virtual environment option, it can send the virtual environment identifier corresponding to that option to the server; when the server obtains the virtual environment identifier, it can determine the virtual environment corresponding to that identifier as the user's virtual environment in the group video session.
In the second determination manner, the capacity of the virtual environment corresponding to the group video session is determined according to the number of users in the group video session, and a virtual environment satisfying the capacity is determined as the virtual environment corresponding to the group video session.
In order to present a reasonable virtual environment to the user and avoid the virtual environment appearing crowded or empty, in this determination manner the server can obtain the number of users in the group video session and thereby determine the capacity the virtual environment should have, the capacity indicating the number of users the virtual environment can accommodate; for example, the capacity of a round-table conference virtual environment corresponds to the number of seats in that virtual environment. Further, according to the determined capacity, the server can select, from the stored virtual environments, the virtual environment whose capacity is closest to the determined capacity. For example, if the number of users is 12 and the server stores three round-table conference virtual environments with 5, 10, and 15 seats respectively, the server can determine the round-table conference virtual environment whose number of seats best accommodates the 12 users as the virtual environment corresponding to the users in the group video session.
In the third determination manner, the virtual environments selected by each user in the group video session are analyzed, the number of times each virtual environment has been selected is obtained, and the virtual environment selected the most times is determined as the virtual environment corresponding to the group video session.
In this determination manner, the server comprehensively analyzes the virtual environments each user has selected, to obtain the virtual environment preferred by more users. For example, there are 5 users in the group video session, and each user's virtual environment selections are as shown in Table 2; the server can therefore determine from Table 2 that virtual environment 1 has been selected the most times (4 times), and determine virtual environment 1 as the virtual environment corresponding to the users in the group video session.
Table 2
It should be noted that, in the above three determination manners, in order to save the computing resources of the server, after the server has determined the virtual environment for a certain user, it may directly determine the virtual environment corresponding to that user as the virtual environment corresponding to each virtual user in the group video session.
In fact, any two or three of the above three determination manners may also be combined, and the embodiment of the present invention does not limit the combination. For example, the first and third determination manners may be combined: if the server receives a virtual environment identifier triggered by a user, it determines the virtual environment corresponding to that identifier; otherwise, the server uses the third determination manner.
203E. Taking the virtual environment as the three-dimensional background, the server determines the display position of each user in the group video session within the virtual environment.
In this step, in order to make each user in the group video session blend reasonably into the virtual environment, the server needs to determine the display position of each user in the virtual environment. The display position refers to the synthesis position of an ordinary user's video data or of a virtual user's three-dimensional virtual character. The embodiment of the present invention does not limit the way of determining the display position. For example, for the user in question, the user's viewing angle may default to the front view, so that the orientation of the user's three-dimensional virtual character is consistent with the direction of the front view. Accordingly, that user may or may not be displayed in the group video session; if displayed, referring to Fig. 3, the user may correspond to the display position indicated by the arrow in Fig. 3. In addition, for the other users, the server may use the following five determination manners (determination manners 1-5) to determine the display positions.
Determination manner 1: according to the social data between the user and the other users in the group video session, the intimacy between the user and each of the other users is analyzed, and the display positions of the other users are arranged starting from either side of the user in order of intimacy.
In order to build a more lifelike session environment, this determination manner takes into account the social tendencies of each user in an actual session and determines the display position of each user according to intimacy. The social data is not limited to data such as the number of chats, the duration of the friendship, and the number of comments and likes. The embodiment of the present invention does not limit the method of analyzing intimacy. For example, let C denote intimacy, let the number of chats be denoted chat with a weight of 0.4, let the friendship duration be denoted time with a weight of 0.3, and let the number of comments and likes be denoted comment with a weight of 0.3; the intimacy can then be expressed as:
C = 0.4*chat + 0.3*time + 0.3*comment
Therefore, if the other users are user 1, user 2, user 3, and user 4, the social data between these users and the user is as shown in Table 3, and the intimacies between these users and the user are denoted C1, C2, C3, and C4, then C1 is 37, C2 is 4, C3 is 82, and C4 is 76. The server can therefore determine the position closest to the user as the display position of user 3, and arrange the display positions of user 4, user 1, and user 2 in order of decreasing intimacy.
Table 3

User   | Chat (times) | Time (days) | Comment (times)
User 1 | 10           | 100         | 10
User 2 | 1            | 10          | 2
User 3 | 40           | 200         | 20
User 4 | 100          | 100         | 20
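A minimal sketch of this intimacy-based ordering, using the weights and the Table 3 figures above, follows; the data structures and function names are illustrative only.

```python
# Hypothetical sketch of determination manner 1: rank other users by intimacy
# C = 0.4*chat + 0.3*time + 0.3*comment and seat them from closest to farthest.

def intimacy(social):
    return 0.4 * social["chat"] + 0.3 * social["time"] + 0.3 * social["comment"]

def order_by_intimacy(social_data):
    """social_data: {user_id: {"chat": ..., "time": ..., "comment": ...}}."""
    return sorted(social_data, key=lambda u: intimacy(social_data[u]), reverse=True)

social_data = {
    "user1": {"chat": 10,  "time": 100, "comment": 10},   # C1 = 37
    "user2": {"chat": 1,   "time": 10,  "comment": 2},    # C2 = 4
    "user3": {"chat": 40,  "time": 200, "comment": 20},   # C3 = 82
    "user4": {"chat": 100, "time": 100, "comment": 20},   # C4 = 76
}
print(order_by_intimacy(social_data))  # ['user3', 'user4', 'user1', 'user2']
```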
Determination manner 2: the user identities of the other users are obtained, the position opposite the user is determined as the display position of the user with the highest user identity among the other users, and the display positions of the remaining users among the other users are determined at random.
In order to highlight the leading role of a certain user in the group video session, the server can determine the display positions according to user identity. The user identity indicates the importance of a user in this group video session. The embodiment of the present invention does not limit the criterion for weighing user identity. For example, if user A among the other users is the initiating user of the group video session, user A is likely to lead this group video session and is therefore determined as the user with the highest identity. As another example, if user B among the other users is an administrator of the group corresponding to the group video session, user B may also be determined as the user with the highest identity.
Determination manner 3: the display positions of the other users are arranged starting from either side of the user according to the chronological order in which the other users joined the group video session.
In order to make the process of determining display positions simpler and save the computing resources of the server, the display positions may be determined directly according to the time each user joined the group video session. Usually, a user confirms on his or her own whether to join the group video session; therefore, when the user equipment detects a user's confirmation operation for joining the group video session, it can send a join confirmation message to the server. When the server receives the first join confirmation message in the group video session, it can arrange the user corresponding to that message at the display position closest to the user, and then arrange, in order, the users corresponding to the join confirmation messages received subsequently.
Determination manner 4: the position selected by a user in the virtual environment is determined as that user's display position in the virtual environment.
In order to make the process of determining display positions more flexible, the server also supports users selecting their display positions themselves. In this determination manner, the server can provide a virtual environment template to each user before the group video session starts, and each user selects a display position in the virtual environment template. Of course, in order to avoid conflicts when users select display positions, the server should display the currently selected display positions in real time; for example, when a certain display position has been selected, the server can add a "not selectable" label to that position, so that each user selects a display position from the selectable positions.
Determination manner 5: the position opposite the user is determined as the display position of the ordinary users, and the display positions of the remaining users among the other users are determined at random.
Considering that ordinary users are generally displayed in the form of two-dimensional characters in the three-dimensional virtual environment, in order to avoid distortion of the ordinary users' two-dimensional video data and to show the ordinary users as fully as possible, the server can determine the position opposite the user as the display position of the ordinary users, and determine the display positions of the remaining users at random.
It should be noted that each user corresponds to one display region; therefore, when a certain user A selects a display position, the server should confirm the display region corresponding to user A. Moreover, in order to make the spacing between the users displayed in the virtual environment more even, the server can partition display regions in the virtual environment in advance; for example, in a round-table conference virtual environment, each seat corresponds to one display region.
Of course, any two or more of the above five determination manners may also be combined. For example, determination manner 4 may be combined with determination manner 5: the server first determines the position opposite the user as the display position of the ordinary users, and then provides a virtual environment template to each virtual user in which the display positions determined for the ordinary users carry a "not selectable" label, so that each virtual user can select a display position from the remaining selectable positions.
203F. For the ordinary users in the group video session, the server synthesizes the specified video data of each ordinary user at the display position corresponding to that ordinary user.
The specified video data refers to video data that is obtained based on the received video data of the ordinary user and conforms to the virtual-reality display mode. In this step, since ordinary users include first ordinary users and second ordinary users (a first ordinary user being an ordinary user using a binocular camera and a second ordinary user being an ordinary user using a monocular camera), the video data of the two kinds of ordinary users differs, and the way the server obtains the specified video data also differs. The embodiment of the present invention illustrates this with case 1 and case 2:
Case 1: if the ordinary users include a first ordinary user, the two channels of two-dimensional video data of the first ordinary user are converted into first three-dimensional video data, and the first three-dimensional video data is used as the specified video data; or, if the ordinary users include a first ordinary user, the two channels of two-dimensional video data of the first ordinary user are used as the specified video data.
In this case, in order to display the first ordinary user in the virtual environment in the form of a three-dimensional character, the server can obtain the specified video data in two ways:
In the first way, the two channels of two-dimensional video data are converted into the first three-dimensional video data. Since the two channels of two-dimensional video data capture the actual scene of the ordinary user from two viewing angles, a pixel of one channel of two-dimensional video is taken as a reference and the corresponding pixel in the other channel is determined; the two pixels correspond to the same position in the actual scene, so the parallax of the two pixels can be determined. After every pixel in the two channels of two-dimensional video data has been processed in this way, a disparity map can be obtained, and the three-dimensional image data of the actual scene is constructed from the disparity map.
In the second way, the two channels of two-dimensional video data are used directly as the specified video data, and when the specified video data is sent to the VR device, a specified display instruction is also sent; the specified display instruction instructs the VR device to render the two channels of two-dimensional video data respectively in the left-eye and right-eye screens. By rendering the two channels of two-dimensional video data of different viewing angles respectively in the left-eye and right-eye screens, parallax is formed during display, achieving a three-dimensional display effect.
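As an illustration of the first way (disparity-based reconstruction), a minimal sketch using OpenCV block matching follows; the patent does not prescribe any particular algorithm, and the matcher parameters and the reprojection matrix Q are assumptions taken from a typical calibrated stereo setup.

```python
# Hypothetical sketch of case 1, first way: build a disparity map from the two camera
# views and reproject it to 3D points. OpenCV's block matcher stands in for the
# per-pixel matching described above.
import cv2
import numpy as np

def two_view_to_3d(left_bgr, right_bgr, Q):
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # Q comes from stereo calibration
    return disparity, points_3d
```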
Case 2: if the ordinary users include a second ordinary user, the two-dimensional video data of the second ordinary user is used as the specified video data.
It should be noted that the embodiment of the present invention does not limit the way of determining the user type of an ordinary user. For example, if the server simultaneously receives two channels of two-dimensional video data from an ordinary user, it may determine that the ordinary user is a first ordinary user; otherwise, it may determine that the ordinary user is a second ordinary user.
Based on the display position determined in step 203E and the specified video data obtained in step 203F, the server can synthesize the specified video data at the display position corresponding to the ordinary user. Of course, in order to make the display effect more realistic, before synthesis the server can adjust the display size of the specified video data to the synthesis size according to a default setting; the synthesis size may be determined by the ratio between the virtual environment and a real person, with each virtual environment corresponding to one synthesis size.
It should be noted that, since the specified video data is video data of only one viewing angle (for a second ordinary user) or two viewing angles (for a first ordinary user), the specified video data occupies only a two-dimensional spatial position in the virtual environment when synthesized. Moreover, the display positions of the ordinary users differ; in order to provide the user with a better display effect, the server can add a frame at the layer edge of the specified video data during synthesis, so that the specified video data appears to be rendered on a "virtual screen" in the virtual environment. Of course, if the display positions of two or more pieces of specified video data are adjacent, the server can also add a frame around the layer edges of these pieces of specified video data during synthesis, so that two or more ordinary users can be displayed on one "virtual screen". Referring to Fig. 4, an embodiment of the present invention provides a schematic diagram of a group video session scene: as shown in part (a) of Fig. 4, one ordinary user is displayed on one "virtual screen"; as shown in part (b) of Fig. 4, two ordinary users are displayed on one "virtual screen".
203G. For the virtual users in the group video session, the server synthesizes the three-dimensional virtual character and audio data of each virtual user at the display position corresponding to that virtual user.
In this step, the server can obtain the three-dimensional virtual character of the virtual user (the obtaining process is the same as in step 203A), adjust the three-dimensional virtual character to the synthesis size, synthesize the adjusted three-dimensional virtual character at the display position corresponding to the virtual user, and synthesize the resulting three-dimensional image data with the obtained audio data of the virtual user, to obtain the audio-video data of the virtual user.
203H. The server uses the synthesized video data as the target video data of the user.
Through the synthesis processes of steps 203F and 203G, the server finally obtains the target video data, which includes the virtual character corresponding to each virtual user in the group video session and the video of each ordinary user.
204. During the group video session, the server sends the target video data to the user equipment of the user, so that the user carries out the group video session.
For each user in the group video session, if the user type of the user is ordinary user, the server can send the target video data obtained in steps 203A-203C to the user's terminal; if the user type of the user is virtual user, the server can send the target video data obtained in steps 203D-203H to the user's VR device, so that each user can carry out the group video session. Referring to Fig. 5, an embodiment of the present invention provides a schematic diagram of a display scene, in which a user who logs in to the server with a terminal is a terminal user, and a user who logs in to the server with a VR device is a VR device user.
It should be noted that a certain user may also have a designated management permission during the group video session. The designated management permission refers to the permission to invite or remove users during the group video session, and the embodiment of the present invention does not limit which users have it. For example, the server may grant this management permission to the user who initiated the group video session. As shown in Fig. 6, an embodiment of the present invention provides a flow chart of a virtual user carrying out a group video session. The virtual user can invite other users outside the group video session to enter the group video session, remove a certain user from the group video session, send a private-chat request to another user, or receive a private-chat request from another user.
205. When the terminal receives the target video data of the group video session sent by the server, it displays the target video data, so that the ordinary users in the group video session are displayed in the form of two-dimensional characters and the virtual users in the group video session are displayed in the form of two-dimensional virtual characters.
The user type of the terminal user is ordinary user; therefore, the terminal user adopts the two-dimensional display mode when participating in the group video session.
Since the two-dimensional video data of each user has already been synthesized on the server side according to display position and display size, when the terminal receives the target video data, it can render the target video data on the screen, with each region on the screen displaying the two-dimensional character of an ordinary user or the two-dimensional virtual character corresponding to a virtual user.
206. When the VR device receives the target video data of the group video session sent by the server, it displays the target video data, so that the ordinary users in the group video session are displayed in the virtual environment in the form of two-dimensional or three-dimensional characters, and the virtual users in the group video session are displayed in the virtual environment in the form of three-dimensional virtual characters.
The user type of the VR device user is virtual user; therefore, the VR device user adopts the virtual-reality display mode when participating in the group video session.
Since the two-dimensional or three-dimensional video data of the ordinary users and the three-dimensional virtual characters corresponding to the virtual users have already been synthesized on the server side according to display position, when the VR device receives the target video data, it can render the target video data in the left-eye and right-eye screens of the VR device, so that the VR device displays the two-dimensional or three-dimensional character of each ordinary user at that ordinary user's display position, and displays the three-dimensional virtual character of each virtual user at that virtual user's display position.
In addition, in order to clearly prompt the VR device user about who is speaking, based on the target video data, if the VR device detects that any user in the group video session is speaking, a speaking prompt is displayed at the display position corresponding to that user. The form of the speaking prompt is not limited to a text prompt, an arrow icon, a flashing "speaking" icon, or the like. The embodiment of the present invention does not limit the way of detecting whether a user is speaking. For example, when the VR device detects the audio data of a user in the current target video data, it determines that the user is speaking, further determines the display position corresponding to that user, and displays the speaking prompt at that display position.
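A minimal client-side sketch of this speaking prompt is shown below, assuming a simple energy threshold on each user's audio track; the threshold value and the show_prompt/hide_prompt renderer helpers are assumptions made for illustration.

```python
# Hypothetical sketch of the speaking prompt on the VR device: mark the display position
# of any user whose audio track currently carries speech.
import numpy as np

SPEECH_ENERGY_THRESHOLD = 0.01   # assumed RMS threshold for "is speaking"

def update_speaking_prompts(audio_tracks, display_positions, renderer):
    """audio_tracks: {user_id: np.ndarray of recent samples in [-1, 1]};
    display_positions: {user_id: position in the virtual environment}."""
    for user_id, samples in audio_tracks.items():
        rms = float(np.sqrt(np.mean(np.square(samples)))) if len(samples) else 0.0
        if rms > SPEECH_ENERGY_THRESHOLD:
            renderer.show_prompt(display_positions[user_id], text="speaking")
        else:
            renderer.hide_prompt(display_positions[user_id])
```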
In the embodiments of the present invention, the user type of each user in the group video session is determined, and the video data of the group video session is processed according to the user type. When the user type is virtual user, target video data matching the virtual-reality display mode indicated by the virtual user can be obtained; when the user type is ordinary user, target video data matching the two-dimensional display mode indicated by the ordinary user can be obtained. A reasonable display mode is thus used to display the video data for each type of user, so that a group video session can be carried out among different types of users without restriction, improving the flexibility of group video sessions.
In addition, when the user type of the user is ordinary user, the three-dimensional virtual characters corresponding to the virtual users in the group video session are converted into two-dimensional virtual characters, and each two-dimensional virtual character is synthesized with the two-dimensional background and the audio data to obtain the two-dimensional video data of the virtual user, so that the two-dimensional video data of the virtual user matches the two-dimensional display mode corresponding to the user, thereby providing a specific way of processing the video data of the virtual users in the group video session for the user.
In addition, when the user type of the user is virtual user, the display position of each user in the group video session within the virtual environment can be determined, and the two-dimensional video data of the ordinary users and the three-dimensional virtual characters of the virtual users are respectively synthesized at the corresponding display positions, so that the synthesized video data matches the virtual-reality display mode corresponding to the user, thereby providing a specific way of processing the video data of the group video session for the user.
In addition, for the first ordinary user and the second ordinary user, different ways of obtaining the specified video data are provided: the two channels of two-dimensional video data of a first ordinary user are processed into first three-dimensional video data, or the two channels of two-dimensional video data are directly taken as the specified video data and the VR device is informed of the display manner; the two-dimensional video data of a second ordinary user is used as the specified video data. Through these two different obtaining manners, a display effect corresponding to the type of the ordinary user can be provided intelligently.
In addition, at least three specific methods of determining the virtual environment corresponding to the group video session are provided: a user may select the virtual environment by himself or herself; a virtual environment whose capacity matches the number of users in the group video session may be selected; or the virtual environments each user has selected may be analyzed and the virtual environment selected the most times may be chosen, making the ways of determining the virtual environment more varied.
In addition, at least five determination manners are provided to determine the display position of each user in the virtual environment: the server intelligently selects a seat for each user according to the intimacy between users, the user identity, or the time the user joined the group video session; alternatively, the user selects a display position by himself or herself, which is more user-friendly; alternatively, in order to show an ordinary user as fully as possible, the display position of the ordinary user is placed opposite the front viewing angle of the user.
Fig. 7 is a device block diagram of a group video session according to an embodiment of the present invention. Referring to Fig. 7, the device specifically includes:
a creation module 701, configured to create a group video session;
a determining module 702, configured to, for each user in the group video session, determine the user type of the user according to the device information of the user, the user type including ordinary user and virtual user, the ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session, and the virtual user indicating that the user adopts a virtual-reality display mode when participating in the group video session;
a processing module 703, configured to process the video data of the group video session according to the video display mode indicated by the user type of the user to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user;
a sending module 704, configured to send the target video data to the user equipment of the user during the group video session, so that the user carries out the group video session.
In the embodiments of the present invention, the user type of each user in the group video session is determined, and the video data of the group video session is processed according to the user type. When the user type is virtual user, target video data matching the virtual-reality display mode indicated by the virtual user can be obtained; when the user type is ordinary user, target video data matching the two-dimensional display mode indicated by the ordinary user can be obtained. A reasonable display mode is thus used to display the video data for each type of user, so that a group video session can be carried out among different types of users without restriction, improving the flexibility of group video sessions.
In a possible implementation, the processing module 703 is configured to: if the user type of the user is ordinary user, convert the three-dimensional virtual character corresponding to each virtual user in the group video session into a two-dimensional virtual character; synthesize the two-dimensional virtual character, the two-dimensional background selected by the virtual user, and the audio data corresponding to the virtual user to obtain first two-dimensional video data; and synthesize at least one piece of first two-dimensional video data and at least one piece of second two-dimensional video data to obtain the target video data of the user, where the second two-dimensional video data refers to the two-dimensional video data of an ordinary user in the group video session.
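A sketch of this 2D pipeline under stated assumptions; to_2d_character, composite_2d and mix_streams are hypothetical helpers standing in for the rendering and compositing steps described above:

```python
def process_for_2d_viewer(session, user):
    first_streams = []
    for vu in session.virtual_users:
        # 3D avatar -> 2D virtual character, composited with the virtual
        # user's chosen 2D background and audio.
        flat_character = to_2d_character(vu.three_d_character)
        first_streams.append(
            composite_2d(flat_character, vu.selected_2d_background, vu.audio))
    # Camera streams of ordinary users (second two-dimensional video data).
    second_streams = [ou.two_d_video for ou in session.ordinary_users]
    # Merge everything into one target video for this 2D viewer.
    return mix_streams(first_streams + second_streams)
```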
In a possible implementation, the processing module 703 is configured to: if the user type of the user is virtual user, determine the virtual environment corresponding to the group video session; take the virtual environment as a three-dimensional background, and determine the display position of each user in the group video session in the virtual environment; for an ordinary user in the group video session, synthesize the designated video data of the ordinary user onto the display position corresponding to the ordinary user; for a virtual user in the group video session, synthesize the three-dimensional virtual character and audio data of the virtual user onto the display position corresponding to the virtual user; and take the synthesized video data as the target video data of the user.
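A sketch of this VR pipeline, reusing the hypothetical helpers from the earlier sketches; Scene and its methods are illustrative stand-ins for a real 3D compositor, not the patented implementation:

```python
def process_for_vr_viewer(session, viewer):
    env = choose_virtual_environment(session)        # 3D background
    seats = assign_display_positions(viewer, session.other_users(viewer),
                                     env.seats)
    scene = Scene(background=env)
    for seat, user in seats.items():
        if user.user_type == "virtual":
            # Virtual users: place their 3D virtual character plus audio.
            scene.place(seat, user.three_d_character, audio=user.audio)
        else:
            # Ordinary users: place their designated video data plus audio.
            scene.place(seat, user.designated_video_data, audio=user.audio)
    return scene.render_stream()                     # target video data
```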
In a possible implementation, the processing module 703 is further configured to: if the ordinary users include a first ordinary user, convert the two channels of two-dimensional video data of the first ordinary user into first three-dimensional video data and take the first three-dimensional video data as the designated video data, the first ordinary user referring to an ordinary user using a binocular camera; or, if the ordinary users include the first ordinary user, take the two channels of two-dimensional video data of the first ordinary user as the designated video data; and if the ordinary users include a second ordinary user, take the two-dimensional video data of the second ordinary user as the designated video data, the second ordinary user referring to an ordinary user using a monocular camera.
In a possible implementation, the processing module 703 is configured to: determine the virtual environment corresponding to the virtual environment option triggered by the user as the virtual environment corresponding to the user in the group video session; or,
the processing module 703 is configured to: determine the capacity of the virtual environment corresponding to the group video session according to the number of users in the group video session, and determine a virtual environment meeting the capacity as the virtual environment corresponding to the group video session; or,
the processing module 703 is configured to: analyze the virtual environments previously selected by each user in the group video session, obtain the number of times each virtual environment has been selected, and determine the virtual environment selected the most times as the virtual environment corresponding to the group video session.
In a possible implementation, the processing module 703 is configured to: analyze the intimacy between the user and other users in the group video session according to the social data between the user and the other users, and arrange the display positions of the other users starting from either side of the user in order of intimacy; or,
the processing module 703 is configured to: obtain the user identities of the other users, determine the side opposite the user as the display position of the user with the highest user identity among the other users, and randomly determine the display positions of the remaining users among the other users; or,
the processing module 703 is configured to: arrange the display positions of the other users starting from either side of the user according to the chronological order in which the other users joined the group video session; or,
the processing module 703 is configured to: determine, according to the position selected by the user in the virtual environment, the selected position as the display position of the user in the virtual environment; or,
the processing module 703 is configured to: determine the side opposite the user as the display position of the ordinary user, and randomly determine the display positions of the remaining users among the other users.
Any combination of all the foregoing optional technical solutions may be used to form optional embodiments of the present invention, which are not described here one by one.
Fig. 8 is a device block diagram of a group video session apparatus provided in an embodiment of the present invention. Referring to Fig. 8, the device specifically includes:
a receiving module 801, configured to receive target video data of a group video session sent by a server, where the video display mode of the target video data matches the video display mode indicated by the user type of the terminal user, the user type of the terminal user being ordinary user, which indicates that the terminal user adopts a two-dimensional display mode when participating in the group video session;
a display module 802, configured to display the target video data, so that an ordinary user in the group video session is displayed in the form of a two-dimensional character, and a virtual user in the group video session is displayed in the form of a two-dimensional virtual character.
In the embodiment of the present invention, target video data is received; since the target video data is obtained by the server through processing according to the user type, the target video data matches the two-dimensional display mode indicated by the ordinary user, so that video data is presented to the terminal user with a suitable display mode, group video sessions can be carried out between different types of users without restriction, and the flexibility of the group video session is improved.
Fig. 9 is a device block diagram of a group video session apparatus provided in an embodiment of the present invention. Referring to Fig. 9, the device specifically includes:
a receiving module 901, configured to receive target video data of a group video session sent by a server, where the video display mode of the target video data matches the video display mode indicated by the user type of the VR device user, the user type of the VR device user being virtual user, which indicates that the VR device user adopts a virtual reality display mode when participating in the group video session;
a display module 902, configured to display the target video data, so that an ordinary user in the group video session is displayed in the virtual environment in the form of a two-dimensional character or a three-dimensional character, and a virtual user in the group video session is displayed in the virtual environment in the form of a three-dimensional virtual character.
In the embodiment of the present invention, target video data is received; since the target video data is obtained by the server through processing according to the user type, the target video data matches the virtual reality display mode indicated by the virtual user, so that video data is presented to the VR device user with a suitable display mode, group video sessions can be carried out between different types of users without restriction, and the flexibility of the group video session is improved.
In a possible implementation, the display module 902 is configured to: display the two-dimensional character or three-dimensional character of the ordinary user on the display position corresponding to the ordinary user; and display the three-dimensional virtual character of the virtual user on the display position corresponding to the virtual user.
In a possible implementation, the display module 902 is further configured to: based on the target video data, if it is detected that any user in the group video session is speaking, display a speaking prompt on the display position corresponding to that user, for example as sketched below.
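A hypothetical client-side sketch of the speaking prompt: the threshold, field names and set_prompt call are illustrative assumptions about how speech might be detected from the received target video data, not the claimed mechanism.

```python
SPEECH_THRESHOLD = 0.2  # illustrative value

def update_speaking_prompts(frame, display):
    # Show a prompt at the display position of any participant whose audio
    # level in the current frame exceeds the threshold.
    for participant in frame.participants:
        speaking = participant.audio_level > SPEECH_THRESHOLD
        display.set_prompt(participant.display_position,
                           visible=speaking, text="speaking")
```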
Any combination of all the foregoing optional technical solutions may be used to form optional embodiments of the present invention, which are not described here one by one.
It should be noted that when the group video session device provided in the foregoing embodiments conducts a group video session, the division into the foregoing function modules is used only as an example; in practical applications, the foregoing functions may be allocated to different function modules as required, that is, the internal structure of the device may be divided into different function modules to complete all or some of the functions described above. In addition, the group video session device provided in the foregoing embodiments belongs to the same concept as the group video session method embodiments; for the specific implementation process, refer to the method embodiments, and details are not described here again.
Figure 10 is a schematic structural diagram of a terminal provided in an embodiment of the present invention. Referring to Figure 10, the terminal includes:
The terminal 1000 may include a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. Those skilled in the art will understand that the terminal structure shown in Figure 10 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or use a different component arrangement. Specifically:
The RF circuit 110 may be used to receive and send signals during information receiving and sending or during a call; in particular, after receiving downlink information from a base station, it delivers the information to one or more processors 180 for processing, and it sends uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 performs various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal 1000 (such as audio data and a phone book) and the like. In addition, the memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input digit or character information, and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touchpad, collects touch operations performed by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface 131 with a finger, a stylus, or any other suitable object or accessory), and drives a corresponding connected apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch-sensitive surface 131 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces of the terminal 1000; these graphical user interfaces may consist of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; after detecting a touch operation on or near it, the touch-sensitive surface 131 transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in Fig. 10 the touch-sensitive surface 131 and the display panel 141 implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 1000 may also include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the terminal 1000 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when static, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition related functions (such as a pedometer and tapping); other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor that may also be configured on the terminal 1000 are not described here again.
The audio circuit 160, the loudspeaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 1000. The audio circuit 160 may transmit the electrical signal converted from received audio data to the loudspeaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. After the audio data is output to the processor 180 for processing, it is sent through the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between an external earphone and the terminal 1000.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 1000 can help the user send and receive e-mails, browse webpages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although Figure 10 shows the WiFi module 170, it can be understood that it is not a necessary component of the terminal 1000 and may be omitted as required without changing the essence of the invention.
The processor 180 is the control center of the terminal 1000, and connects all parts of the entire mobile phone through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, the processor 180 performs the various functions of the terminal 1000 and processes data, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The terminal 1000 also includes a power supply 190 (such as a battery) that supplies power to all components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 190 may also include any components such as one or more direct current or alternating current power supplies, a recharging system, a power supply failure detection circuit, a power converter or inverter, and a power supply status indicator.
Although not shown, the terminal 1000 may also include a camera, a Bluetooth module, and the like, and details are not described here. Specifically, in this embodiment, the display unit of the terminal is a touch-screen display, and the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the following operations:
receiving target video data of a group video session sent by a server, where the video display mode of the target video data matches the video display mode indicated by the user type of the terminal user, the user type of the terminal user being ordinary user, which indicates that the terminal user adopts a two-dimensional display mode when participating in the group video session; and displaying the target video data, so that an ordinary user in the group video session is displayed in the form of a two-dimensional character and a virtual user in the group video session is displayed in the form of a two-dimensional virtual character.
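A hypothetical terminal-side sketch of these operations; receive_target_video_frame and draw_2d are illustrative names for the receiving and display steps, not the patented API:

```python
def run_terminal_client(server_conn, renderer):
    # Receive target video data from the server and render it so that both
    # ordinary users and virtual users appear in two-dimensional form.
    while True:
        target_frame = server_conn.receive_target_video_frame()
        if target_frame is None:        # session ended
            break
        renderer.draw_2d(target_frame)
```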
Figure 11 is a block diagram of a group video session device 1100 provided in an embodiment of the present invention. For example, the device 1100 may be provided as a server. Referring to Figure 11, the device 1100 includes a processing component 1122, which further includes one or more processors, and memory resources represented by a memory 1132 for storing instructions executable by the processing component 1122, such as an application program. The application program stored in the memory 1132 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1122 is configured to execute the instructions to perform the server-side method in the foregoing embodiments.
The device 1100 may also include a power supply component 1126 configured to perform power management of the device 1100, a wired or wireless network interface 1150 configured to connect the device 1100 to a network, and an input/output (I/O) interface 1158. The device 1100 may operate based on an operating system stored in the memory 1132, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A person of ordinary skill in the art can understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention, and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (20)

1. A method of group video session, characterized in that it is applied to a server, the method comprising:
creating a group video session;
for each user in the group video session, determining the user type of the user according to the device information of the user, the user type comprising ordinary user and virtual user, the ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session, and the virtual user indicating that the user adopts a virtual reality display mode when participating in the group video session;
processing the video data of the group video session according to the video display mode indicated by the user type of the user, to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user;
during the group video session, sending the target video data to the user equipment of the user, so that the user carries out the group video session.
2. The method according to claim 1, characterized in that processing the video data of the group video session according to the video display mode indicated by the user type of the user to obtain the target video data of the user comprises:
if the user type of the user is ordinary user, converting the three-dimensional virtual character corresponding to a virtual user in the group video session into a two-dimensional virtual character;
synthesizing the two-dimensional virtual character, the two-dimensional background selected by the virtual user, and the audio data corresponding to the virtual user, to obtain first two-dimensional video data;
synthesizing at least one piece of first two-dimensional video data and at least one piece of second two-dimensional video data to obtain the target video data of the user, the second two-dimensional video data referring to the two-dimensional video data of an ordinary user in the group video session.
3. The method according to claim 1, characterized in that processing the video data of the group video session according to the video display mode indicated by the user type of the user to obtain the target video data of the user comprises:
if the user type of the user is virtual user, determining the virtual environment corresponding to the group video session;
taking the virtual environment as a three-dimensional background, and determining the display position of each user in the group video session in the virtual environment;
for an ordinary user in the group video session, synthesizing the designated video data of the ordinary user onto the display position corresponding to the ordinary user;
for a virtual user in the group video session, synthesizing the three-dimensional virtual character and audio data of the virtual user onto the display position corresponding to the virtual user;
taking the synthesized video data as the target video data of the user.
4. The method according to claim 3, characterized in that, before synthesizing the designated video data of the ordinary user onto the display position corresponding to the ordinary user for the ordinary user in the group video session, the method further comprises:
if the ordinary user comprises a first ordinary user, converting the two channels of two-dimensional video data of the first ordinary user into first three-dimensional video data, and taking the first three-dimensional video data as the designated video data, the first ordinary user referring to an ordinary user using a binocular camera; or, if the ordinary user comprises the first ordinary user, taking the two channels of two-dimensional video data of the first ordinary user as the designated video data;
if the ordinary user comprises a second ordinary user, taking the two-dimensional video data of the second ordinary user as the designated video data, the second ordinary user referring to an ordinary user using a monocular camera.
5. The method according to claim 3, characterized in that determining the virtual environment corresponding to the group video session comprises:
determining the virtual environment corresponding to the virtual environment option triggered by the user as the virtual environment corresponding to the user in the group video session; or,
determining the capacity of the virtual environment corresponding to the group video session according to the number of users in the group video session, and determining a virtual environment meeting the capacity as the virtual environment corresponding to the group video session; or,
analyzing the virtual environments previously selected by each user in the group video session, obtaining the number of times each virtual environment has been selected, and determining the virtual environment selected the most times as the virtual environment corresponding to the group video session.
6. The method according to claim 3, characterized in that determining the display position of each user in the group video session in the virtual environment comprises:
analyzing the intimacy between the user and other users in the group video session according to the social data between the user and the other users, and arranging the display positions of the other users starting from either side of the user in order of intimacy; or,
obtaining the user identities of the other users, determining the side opposite the user as the display position of the user with the highest user identity among the other users, and randomly determining the display positions of the remaining users among the other users; or,
arranging the display positions of the other users starting from either side of the user according to the chronological order in which the other users joined the group video session; or,
determining, according to the position selected by the user in the virtual environment, the selected position as the display position of the user in the virtual environment; or,
determining the side opposite the user as the display position of the ordinary user, and randomly determining the display positions of the remaining users among the other users.
7. A method of group video session, characterized in that it is applied to a terminal, the method comprising:
receiving target video data of a group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of the terminal user, the user type of the terminal user being ordinary user, the ordinary user indicating that the terminal user adopts a two-dimensional display mode when participating in the group video session;
displaying the target video data, so that an ordinary user in the group video session is displayed in the form of a two-dimensional character, and a virtual user in the group video session is displayed in the form of a two-dimensional virtual character.
8. A method of group video session, characterized in that it is applied to a virtual reality (VR) device, the method comprising:
receiving target video data of a group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of the VR device user, the user type of the VR device user being virtual user, the virtual user indicating that the VR device user adopts a virtual reality display mode when participating in the group video session;
displaying the target video data, so that an ordinary user in the group video session is displayed in the virtual environment in the form of a two-dimensional character or a three-dimensional character, and a virtual user in the group video session is displayed in the virtual environment in the form of a three-dimensional virtual character.
9. The method according to claim 8, characterized in that displaying the target video data comprises:
displaying the two-dimensional character or three-dimensional character of the ordinary user on the display position corresponding to the ordinary user;
displaying the three-dimensional virtual character of the virtual user on the display position corresponding to the virtual user.
10. The method according to claim 8, characterized in that the method further comprises:
based on the target video data, if it is detected that any user in the group video session is speaking, displaying a speaking prompt on the display position corresponding to that user.
11. A device of group video session, characterized in that the device comprises:
a creation module, configured to create a group video session;
a determining module, configured to determine, for each user in the group video session, the user type of the user according to the device information of the user, the user type comprising ordinary user and virtual user, the ordinary user indicating that the user adopts a two-dimensional display mode when participating in the group video session, and the virtual user indicating that the user adopts a virtual reality display mode when participating in the group video session;
a processing module, configured to process the video data of the group video session according to the video display mode indicated by the user type of the user, to obtain target video data of the user, the video display mode of the target video data matching the video display mode indicated by the user type of the user;
a sending module, configured to send the target video data to the user equipment of the user during the group video session, so that the user carries out the group video session.
12. The device according to claim 11, characterized in that the processing module is configured to:
if the user type of the user is ordinary user, convert the three-dimensional virtual character corresponding to a virtual user in the group video session into a two-dimensional virtual character;
synthesize the two-dimensional virtual character, the two-dimensional background selected by the virtual user, and the audio data corresponding to the virtual user, to obtain first two-dimensional video data;
synthesize at least one piece of first two-dimensional video data and at least one piece of second two-dimensional video data to obtain the target video data of the user, the second two-dimensional video data referring to the two-dimensional video data of an ordinary user in the group video session.
13. The device according to claim 11, characterized in that the processing module is configured to:
if the user type of the user is virtual user, determine the virtual environment corresponding to the group video session;
take the virtual environment as a three-dimensional background, and determine the display position of each user in the group video session in the virtual environment;
for an ordinary user in the group video session, synthesize the designated video data of the ordinary user onto the display position corresponding to the ordinary user;
for a virtual user in the group video session, synthesize the three-dimensional virtual character and audio data of the virtual user onto the display position corresponding to the virtual user;
take the synthesized video data as the target video data of the user.
14. The device according to claim 13, characterized in that the processing module is further configured to:
if the ordinary user comprises a first ordinary user, convert the two channels of two-dimensional video data of the first ordinary user into first three-dimensional video data, and take the first three-dimensional video data as the designated video data, the first ordinary user referring to an ordinary user using a binocular camera; or, if the ordinary user comprises the first ordinary user, take the two channels of two-dimensional video data of the first ordinary user as the designated video data;
if the ordinary user comprises a second ordinary user, take the two-dimensional video data of the second ordinary user as the designated video data, the second ordinary user referring to an ordinary user using a monocular camera.
15. The device according to claim 13, characterized in that:
the processing module is configured to: determine the virtual environment corresponding to the virtual environment option triggered by the user as the virtual environment corresponding to the user in the group video session; or,
the processing module is configured to: determine the capacity of the virtual environment corresponding to the group video session according to the number of users in the group video session, and determine a virtual environment meeting the capacity as the virtual environment corresponding to the group video session; or,
the processing module is configured to: analyze the virtual environments previously selected by each user in the group video session, obtain the number of times each virtual environment has been selected, and determine the virtual environment selected the most times as the virtual environment corresponding to the group video session.
16. The device according to claim 13, characterized in that:
the processing module is configured to: analyze the intimacy between the user and other users in the group video session according to the social data between the user and the other users, and arrange the display positions of the other users starting from either side of the user in order of intimacy; or,
the processing module is configured to: obtain the user identities of the other users, determine the side opposite the user as the display position of the user with the highest user identity among the other users, and randomly determine the display positions of the remaining users among the other users; or,
the processing module is configured to: arrange the display positions of the other users starting from either side of the user according to the chronological order in which the other users joined the group video session; or,
the processing module is configured to: determine, according to the position selected by the user in the virtual environment, the selected position as the display position of the user in the virtual environment; or,
the processing module is configured to: determine the side opposite the user as the display position of the ordinary user, and randomly determine the display positions of the remaining users among the other users.
17. A device of group video session, characterized in that the device comprises:
a receiving module, configured to receive target video data of a group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of the terminal user, the user type of the terminal user being ordinary user, the ordinary user indicating that the terminal user adopts a two-dimensional display mode when participating in the group video session;
a display module, configured to display the target video data, so that an ordinary user in the group video session is displayed in the form of a two-dimensional character, and a virtual user in the group video session is displayed in the form of a two-dimensional virtual character.
18. A device of group video session, characterized in that the device comprises:
a receiving module, configured to receive target video data of a group video session sent by a server, the video display mode of the target video data matching the video display mode indicated by the user type of the VR device user, the user type of the VR device user being virtual user, the virtual user indicating that the VR device user adopts a virtual reality display mode when participating in the group video session;
a display module, configured to display the target video data, so that an ordinary user in the group video session is displayed in the virtual environment in the form of a two-dimensional character or a three-dimensional character, and a virtual user in the group video session is displayed in the virtual environment in the form of a three-dimensional virtual character.
19. The device according to claim 18, characterized in that the display module is configured to:
display the two-dimensional character or three-dimensional character of the ordinary user on the display position corresponding to the ordinary user;
display the three-dimensional virtual character of the virtual user on the display position corresponding to the virtual user.
20. The device according to claim 18, characterized in that the display module is further configured to:
based on the target video data, if it is detected that any user in the group video session is speaking, display a speaking prompt on the display position corresponding to that user.
CN201710104439.2A 2017-02-24 2017-02-24 Method and device for group video session Active CN108513088B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201710104439.2A CN108513088B (en) 2017-02-24 2017-02-24 Method and device for group video session
PCT/CN2018/075749 WO2018153267A1 (en) 2017-02-24 2018-02-08 Group video session method and network device
TW107106428A TWI650675B (en) 2017-02-24 2018-02-26 Method and system for group video session, terminal, virtual reality device and network device
US16/435,733 US10609334B2 (en) 2017-02-24 2019-06-10 Group video communication method and network device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710104439.2A CN108513088B (en) 2017-02-24 2017-02-24 Method and device for group video session

Publications (2)

Publication Number Publication Date
CN108513088A true CN108513088A (en) 2018-09-07
CN108513088B CN108513088B (en) 2020-12-01

Family

ID=63372785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710104439.2A Active CN108513088B (en) 2017-02-24 2017-02-24 Method and device for group video session

Country Status (1)

Country Link
CN (1) CN108513088B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103238317A (en) * 2010-05-12 2013-08-07 布鲁珍视网络有限公司 Systems and methods for scalable distributed global infrastructure for real-time multimedia communication
CN102164265A (en) * 2011-05-23 2011-08-24 宇龙计算机通信科技(深圳)有限公司 Method and system of three-dimensional video call
US20140085406A1 (en) * 2012-09-27 2014-03-27 Avaya Inc. Integrated conference floor control
CN105721821A (en) * 2016-04-01 2016-06-29 宇龙计算机通信科技(深圳)有限公司 Video calling method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035250A (en) * 2019-03-29 2019-07-19 维沃移动通信有限公司 Audio-frequency processing method, processing equipment, terminal and computer readable storage medium
CN114079803A (en) * 2020-08-21 2022-02-22 上海昊骇信息科技有限公司 Music live broadcast method and system based on virtual reality
CN112312062A (en) * 2020-10-30 2021-02-02 上海境腾信息科技有限公司 3D display method, storage medium and terminal equipment for multi-person conference recording and playback
CN112565057A (en) * 2020-11-13 2021-03-26 广州市百果园网络科技有限公司 Voice chat room service method and device capable of expanding business
CN112565057B (en) * 2020-11-13 2022-09-23 广州市百果园网络科技有限公司 Voice chat room service method and device capable of expanding business
CN113099159A (en) * 2021-03-26 2021-07-09 上海电气集团股份有限公司 Control method and device for teleconference
CN114882972A (en) * 2022-04-13 2022-08-09 江苏医药职业学院 Old people rehabilitation exercise system and method based on virtual reality

Also Published As

Publication number Publication date
CN108513088B (en) 2020-12-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant