CN114679437A - Teleconference method, data interaction method, device, and computer storage medium - Google Patents


Info

Publication number
CN114679437A
Authority
CN
China
Prior art keywords
information
teleconference
conference
data
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210238594.4A
Other languages
Chinese (zh)
Inventor
章佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210238594.4A
Publication of CN114679437A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H04L65/4038 Arrangements for multi-party communication, e.g. for conferences with floor control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G06F9/452 Remote windowing, e.g. X-Window System, desktop virtualisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1822 Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the invention provides a teleconferencing method, a data interaction method, a device, and a computer storage medium. The teleconferencing method comprises: acquiring eye movement information of participants participating in a teleconference; generating interaction information corresponding to the teleconference based on the eye movement information; and performing an interactive operation corresponding to the teleconference based on the interaction information to assist the teleconference. In this technical scheme, the eye movement information of the participants is acquired, interaction information corresponding to the teleconference is generated based on the eye movement information, and the corresponding interactive operation is then executed based on the interaction information. This effectively enables the speaker to interact naturally with the other participants through gaze, so that the rhythm of the teleconference is more natural and efficient, the quality and effect of conference information transmission are guaranteed, and the practicability of the method is further improved.

Description

Teleconference method, data interaction method, device, and computer storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a teleconference method, a data interaction method, a device, and a computer storage medium.
Background
A teleconference is a conference held across regions by means of modern communication. Existing teleconferences are mostly implemented on the personal computer (PC) side. Because the conference content usually has to be displayed in the conference interface, natural person-to-person interaction between the speaker and the other participants is lacking, so the conference rhythm is easily broken and information transmission is easily interrupted, which reduces the transmission quality and effect of the conference.
Disclosure of Invention
The embodiments of the invention provide a teleconference method, a data interaction method, a device, and a computer storage medium, which realize natural interaction between a speaker and participants through gaze, thereby making the rhythm of a teleconference more natural and reliable and ensuring the quality and effect of conference information transmission.
In a first aspect, an embodiment of the present invention provides a remote conference method, including:
acquiring eye movement information of participants participating in a teleconference;
generating interaction information corresponding to the teleconference based on the eye movement information;
and performing an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference.
In a second aspect, an embodiment of the present invention provides a remote conference apparatus, including:
the first acquisition module is used for acquiring eye movement information of participants participating in the teleconference;
a first generating module, configured to generate interaction information corresponding to the teleconference based on the eye movement information;
and the first processing module is used for executing interactive operation corresponding to the remote conference based on the interactive information so as to assist the remote conference in proceeding.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the teleconferencing method of the first aspect as described above.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer implement the teleconferencing method in the first aspect when executed.
In a fifth aspect, an embodiment of the present invention provides a computer program product, including: computer program, which, when executed by a processor of an electronic device, causes the processor to carry out the steps of the teleconferencing method as described above in relation to the first aspect.
In a sixth aspect, an embodiment of the present invention provides a data interaction method, including:
acquiring data to be processed displayed in a display interface;
determining eye movement information of a plurality of persons respectively viewing the data to be processed;
generating interactive information corresponding to the data to be processed based on the eye movement information;
and performing labeling display on the data to be processed based on the interaction information.
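By way of a non-limiting illustration, the four steps of this aspect could be sketched roughly as follows. Python is used only for exposition; all names, coordinates, and region boxes are hypothetical and not part of any claimed embodiment. The sketch aggregates the gaze of several viewers over regions of the displayed data so that frequently viewed regions can be labeled:

```python
from collections import Counter

def annotate_regions(gaze_samples, regions):
    """Count how many distinct viewers dwelt in each region of the displayed data.

    gaze_samples: {person_id: [(x, y), ...]} gaze points per viewer (hypothetical format).
    regions: {region_name: (x0, y0, x1, y1)} bounding boxes on the display interface.
    Returns {region_name: number of distinct viewers who looked at it}, which can
    then drive the labeling display on the data to be processed.
    """
    viewers = Counter()
    for person, points in gaze_samples.items():
        seen = set()
        for x, y in points:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    seen.add(name)
        for name in seen:
            viewers[name] += 1
    return dict(viewers)

# Two hypothetical viewers looking at two regions of a shared document.
labels = annotate_regions(
    {"alice": [(10, 10), (12, 11)], "bob": [(11, 9), (80, 80)]},
    {"chart": (0, 0, 50, 50), "table": (60, 60, 100, 100)},
)
```

The resulting per-region viewer counts would then be rendered as annotations on the display interface.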
In a seventh aspect, an embodiment of the present invention provides a data interaction apparatus, including:
the second acquisition module is used for acquiring the data to be processed displayed in the display interface;
the second determining module is used for determining eye movement information of a plurality of persons respectively viewing the data to be processed;
the second generation module is used for generating interactive information corresponding to the data to be processed based on the eye movement information;
and the second processing module is used for performing label display on the data to be processed based on the interactive information.
In an eighth aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the data interaction method in the sixth aspect.
In a ninth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer implement the data interaction method in the above sixth aspect when executed.
In a tenth aspect, an embodiment of the present invention provides a computer program product, including: a computer program, which, when executed by a processor of an electronic device, causes the processor to perform the steps of the data interaction method of the sixth aspect.
In an eleventh aspect, an embodiment of the present invention provides a data interaction method, including:
acquiring a preset virtual space generated by a virtual reality technology;
determining eye movement information corresponding to each person participating in a preset virtual space;
generating interactive information corresponding to the preset virtual space based on the eye movement information;
and executing interactive operation corresponding to the preset virtual space based on the interactive information.
In a twelfth aspect, an embodiment of the present invention provides a data interaction apparatus, including:
the third acquisition module is used for acquiring a preset virtual space generated by a virtual reality technology;
the third determining module is used for determining eye movement information corresponding to each person participating in the preset virtual space;
a third generating module, configured to generate interaction information corresponding to the preset virtual space based on the eye movement information;
and the third processing module is used for executing interactive operation corresponding to the preset virtual space based on the interactive information.
In a thirteenth aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the data interaction method in the eleventh aspect.
In a fourteenth aspect, an embodiment of the present invention provides a computer storage medium, which is used for storing a computer program, and the computer program enables a computer to implement the data interaction method in the eleventh aspect when executed.
In a fifteenth aspect, an embodiment of the present invention provides a computer program product, including: a computer program, which, when executed by a processor of an electronic device, causes the processor to perform the steps of the data interaction method according to the eleventh aspect.
In a sixteenth aspect, an embodiment of the present invention provides a teleconference method, including:
acquiring conference permission information of each participating terminal in a teleconference;
determining conference data which can be displayed by each conference participating terminal based on the conference permission information, wherein the conference data comprises at least one part of a display file which needs to be displayed in a remote conference;
and displaying the corresponding conference data by using the conference-participating terminal.
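By way of a non-limiting illustration, the permission-based selection of conference data described in this aspect could be sketched as follows. The permission levels, terminal identifiers, and page contents are hypothetical; the embodiment does not prescribe any particular representation:

```python
def conference_data_for(terminal_permissions, display_file):
    """Select, per participating terminal, the parts of the display file it may show.

    terminal_permissions: {terminal_id: permission_level} (hypothetical format).
    display_file: [(required_level, page_content), ...] — each part of the file
    carries the minimum permission level needed to display it.
    Returns {terminal_id: [parts visible at that terminal's level]}.
    """
    return {
        terminal: [page for level, page in display_file if level <= perm]
        for terminal, perm in terminal_permissions.items()
    }

# Two hypothetical terminals with different permission levels.
visible = conference_data_for(
    {"t1": 1, "t2": 3},
    [(1, "agenda"), (2, "budget"), (3, "salaries")],
)
```

Each participating terminal would then display only its own entry of the result.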
In a seventeenth aspect, an embodiment of the present invention provides a remote conference apparatus, including:
the fourth acquisition module is used for acquiring the conference permission information of each participating terminal in the teleconference;
a fourth determining module, configured to determine, based on the conference permission information, conference data that can be displayed by each participating terminal, where the conference data includes at least a part of a display file that needs to be displayed in a teleconference;
and the fourth processing module is used for displaying the corresponding conference data by using the conference-participating terminal.
In an eighteenth aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the teleconferencing method of the sixteenth aspect as described above.
In a nineteenth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is configured to enable a computer to implement the teleconferencing method in the sixteenth aspect when executed.
In a twentieth aspect, an embodiment of the present invention provides a computer program product, including: a computer program which, when executed by a processor of an electronic device, causes the processor to perform the steps of the teleconferencing method as described in the sixteenth aspect above.
According to the above technical scheme, the eye movement information of the participants in the teleconference is acquired, interaction information corresponding to the teleconference is generated based on the eye movement information, and the corresponding interactive operation is executed based on the interaction information. This effectively enables the speaker to interact naturally with the other participants through gaze, so that the rhythm of the teleconference is natural and efficient, the quality and effect of conference information transmission are guaranteed, and the practicability of the method is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic view of a remote conference method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a teleconference method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of generating interaction information corresponding to the teleconference based on the eye movement information according to the embodiment of the present invention;
fig. 4 is a first schematic diagram of a teleconference operating in session mode according to an embodiment of the present invention;
fig. 5 is a second schematic diagram of a teleconference operating in session mode according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a teleconference operating in speech mode according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a teleconference operating in discussion mode according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of another teleconference method according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating labeled statistical information according to an embodiment of the present invention;
fig. 10 is a schematic flowchart of another remote conference method according to an embodiment of the present invention;
fig. 11 is a schematic diagram of generating a communication group corresponding to the hot spot information of interest according to an embodiment of the present invention;
fig. 12 is a schematic flowchart of a teleconference method according to an embodiment of the present invention;
fig. 13 is a schematic flowchart of a remote conference method according to an embodiment of the present invention;
fig. 14 is a schematic flowchart of a data interaction method according to an embodiment of the present invention;
FIG. 15 is a flowchart illustrating another data interaction method according to an embodiment of the present invention;
fig. 16 is a schematic diagram of another teleconference method according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a remote conference apparatus according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of an electronic device corresponding to the teleconference device provided in the embodiment shown in fig. 17;
fig. 19 is a schematic structural diagram of a data interaction apparatus according to an embodiment of the present invention;
FIG. 20 is a schematic structural diagram of an electronic device corresponding to the data interaction apparatus provided in the embodiment shown in FIG. 19;
FIG. 21 is a schematic structural diagram of another data interaction device according to an embodiment of the present invention;
fig. 22 is a schematic structural diagram of an electronic device corresponding to the data interaction apparatus provided in the embodiment shown in fig. 21;
fig. 23 is a schematic structural diagram of another remote conference device according to an embodiment of the present invention;
fig. 24 is a schematic structural diagram of an electronic device corresponding to the teleconference device provided in the embodiment shown in fig. 23.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any inventive step, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "plural" generally means at least two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The words "if", as used herein, may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a good or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such good or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of additional like elements in the article of commerce or system in which the element is comprised.
In order to facilitate understanding of specific implementation processes and effects of the technical solutions in the present application, the following briefly describes related technologies:
A teleconference is a conference held across regions by means of modern communication. With the rapid development of science and technology, and in order to avoid the low working efficiency caused by dispersed office locations, the teleconference has become an essential way of working.
Existing teleconferences are mostly implemented on the personal computer (PC) side. Because the conference content often has to be displayed in the conference interface, the natural person-to-person interactions between the speaker and the other participants, such as body language, eye contact, and facial expression, are lacking. For example, the speaker cannot interact with the others instantly and does not know whether the participants are following the conference. Moreover, the speaker can only control or edit the conference content through a mouse, a keyboard, or other devices; for example, when performing a display operation such as enlarging or reducing the displayed content, moving or clicking with the mouse is likely to disturb the presentation.
Further, when the conference participants ask the speaker a question, it is not easy for them to indicate which specific position of the conference content the question corresponds to, which may reduce the quality and efficiency of communication. Similarly, when a conference document needs to be presented rapidly during the conference, the participants may easily lose track of the specific position the speaker is referring to.
Based on the above description, current teleconferences are prone to a disjointed conference rhythm and loss of information during transmission, which reduces the quality and effect of the conference. To solve these technical problems, the embodiments provide a teleconference method, a data interaction method, a device, and a computer storage medium. The execution subject of the teleconference method is a teleconference device, which may be implemented as a mobile phone, a tablet computer, a personal computer (PC), a conference room device, a head-mounted display device implemented by Augmented Reality (AR), Virtual Reality (VR), or Mixed Reality (MR) technology, or any other device capable of performing teleconference operations. In particular, reference is made to fig. 1:
the teleconference device is used for acquiring eye movement information of participants participating in the teleconference, wherein each participant can participate in the teleconference through the corresponding teleconference device, for example: the teleconference device that each participant corresponds can include teleconference device a, teleconference device b and teleconference device c, and above-mentioned teleconference device a, teleconference device b and teleconference device c can carry out communication connection through presetting the network and the teleconference device that the speaker corresponds, and it can wireless network or wired network to preset the network to realized that teleconference device, teleconference device a, teleconference device b and teleconference device c can carry out same teleconference.
The teleconference may be a real teleconference or a virtual teleconference. A real teleconference refers to a teleconference formed in real space. For example, the teleconference device corresponding to the speaker is located in Beijing, teleconference device a corresponding to one participant is located in Shanghai, teleconference device b is located in Shenzhen, and teleconference device c is located in Guangzhou; a teleconference can then be generated directly through the preset network and the conference application program in each teleconference device, in which the image acquisition information of the speaker and/or the image acquisition information of the other participants can be presented.
A virtual teleconference refers to a teleconference formed in a virtual space. When the teleconference is virtual, the teleconference device may preferably be a head-mounted display device implemented by AR, VR, or MR technology. Specifically, the teleconference device corresponding to the speaker may be located in Beijing, teleconference device a corresponding to one participant in Beijing or Shanghai, teleconference device b in Beijing or Shenzhen, and teleconference device c in Beijing or Guangzhou. A virtual conference room can be generated directly through the preset network and the conference application programs in each teleconference device, and in the virtual conference room a virtual speaker corresponding to the speaker's teleconference device and virtual participants corresponding to teleconference devices a, b, and c can be generated.
In order to ensure the conference quality and effect of the teleconference, the eye movement information of the participants in the teleconference can be obtained in real time. The number of participants can be one or more; when there are multiple participants, multiple pieces of eye movement information are obtained. Specifically, the eye movement information may include the movement amplitude (or movement distance) of the eyeball, the movement direction (up, down, left, right, etc.), the movement type (blinking, eye closing, etc.), the number of movements, and the like.
After the eye movement information is acquired, the eye movement information may be analyzed to generate interaction information corresponding to the teleconference, and the interaction information may include at least one of: interactive information for moving display data up, interactive information for moving display data down, interactive information for enlarging display data, interactive information for marking display data, interactive information for performing an instant conversation operation to conference participants, and the like. After the interaction information is acquired, an interaction operation corresponding to the teleconference may be performed based on the interaction information, for example: the display data can be adjusted based on the interactive information or the conversation operation between the speaker and the participants can be realized based on the interactive information, so that the teleconference can be effectively assisted to be smoothly and efficiently carried out.
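By way of a non-limiting illustration, the acquire-generate-execute flow described above could be sketched as follows. The thresholds, interaction labels, and view-state fields are hypothetical and chosen only for exposition, not part of any embodiment:

```python
def generate_interaction(eye_event, amp_threshold=0.3, dwell_threshold=2.0):
    """Turn one eye-movement record into an interaction label, or None.

    eye_event: hypothetical record with "direction", "amplitude", and
    optionally "dwell" (seconds of gaze in a preset area).
    """
    if eye_event["direction"] == "up" and eye_event["amplitude"] > amp_threshold:
        return "scroll_up"
    if eye_event["direction"] == "down" and eye_event["amplitude"] > amp_threshold:
        return "scroll_down"
    if eye_event.get("dwell", 0.0) >= dwell_threshold:
        return "mark_region"
    return None

def perform_interaction(label, view):
    """Execute the interaction information on a hypothetical shared view state."""
    if label == "scroll_up":
        view["offset"] -= 1
    elif label == "scroll_down":
        view["offset"] += 1
    elif label == "mark_region":
        view["marks"].append(view["gaze_region"])
    return view
```

A real system would feed eye-tracker records through `generate_interaction` continuously and apply the resulting labels to the conference display.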
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below can be combined with or separated from each other without conflict between the embodiments. In addition, the sequence of steps in the embodiments of the methods described below is merely an example, and is not strictly limited.
Fig. 2 is a schematic flowchart of a remote conference method according to an embodiment of the present invention; referring to fig. 2, this embodiment provides a teleconference method. The execution subject of the method may be a teleconference device, which may be implemented as software or as a combination of software and hardware. Specifically, when implemented as hardware, the teleconference device may be any of various electronic devices having a display screen and an eye tracking apparatus, including but not limited to a smart phone, a tablet computer, a personal computer (PC), a conference room display device, a head-mounted display device implemented by Augmented Reality (AR), Virtual Reality (VR), or Mixed Reality (MR) technology, and the like. When implemented as software, it can be installed in any of the electronic devices exemplified above. Based on the above teleconference device, the teleconference method in this embodiment may include the following steps:
Step S201: and acquiring eye movement information of the participants participating in the teleconference.
Step S202: based on the eye movement information, interaction information corresponding to the teleconference is generated.
Step S203: and performing an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference in proceeding.
The following is a detailed description of specific implementation processes and implementation effects of the above steps:
step S201: and acquiring eye movement information of the participants participating in the teleconference.
In order to ensure the conference quality and effect of the teleconference, the eye movement information of the participants in the teleconference can be acquired in real time. The number of participants can be one or more; when there are multiple participants, multiple pieces of eye movement information are acquired. The eye movement information represents a record of the user's eyeball actions when the user faces the preset interface, and may specifically include the movement amplitude (or movement distance) of the eyeball, a movement path, a movement direction (up, down, left, right, etc.), a movement type (blinking, eye closing, etc.), the number of movements, the dwell time in a preset area, eye gaze angle information, and the like. Specifically, the eye movement information may be obtained by continuously capturing and recording the user's eyeball movement with an eye tracker and/or an image acquisition device; for example, the eye tracker and the image acquisition device can capture fixations, saccades, and smooth-pursuit movements of the eyes, together with the spatial and temporal information of each action.
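By way of a non-limiting illustration, one of the quantities mentioned above — the dwell time in a preset area — could be derived from raw gaze samples as follows. The fixed sampling rate and rectangular region format are hypothetical simplifications:

```python
def dwell_time(samples, region, hz=60):
    """Total time (seconds) the gaze stayed inside a rectangular region.

    samples: [(x, y), ...] gaze positions captured at a fixed sampling rate `hz`
    by an eye tracker or image acquisition device (hypothetical format).
    region: (x0, y0, x1, y1) bounding box of the preset area.
    """
    x0, y0, x1, y1 = region
    inside = sum(1 for x, y in samples if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / hz
```

Comparing this value against a preset threshold is one simple way to decide that a participant is attending to a particular area of the display.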
Example 1: one or more image acquisition devices are configured on the teleconference device. When the teleconference device is a conference room display device, one or more users may face it; in order to accurately acquire the eye movement information of each user, the teleconference device may be provided with a plurality of image acquisition devices distributed evenly around it. After the teleconference device presents the display interface to the users, the eye movement information of each user can be captured and recorded by the configured image acquisition device(s) using eye-tracking technology, thereby obtaining the eye movement information of the participants in the teleconference.
Example 2: the teleconference device is configured with one or more eye trackers. When the teleconference device is a smart phone, a tablet computer, or a personal computer (PC), such a device typically corresponds to a single user; in order to accurately acquire that user's eye movement information, the teleconference device may be provided with one or more eye trackers, and when there are several, they may be distributed evenly around the teleconference device. After the teleconference device presents the display interface to the user, the configured eye tracker(s) capture and record the user's eye movements using eye-tracking technology, thereby obtaining the eye movement information of the participants in the teleconference.
Step S202: based on the eye movement information, interaction information corresponding to the teleconference is generated.
Since the eye movement information includes the movement amplitude, movement path, movement direction (up, down, left, right, etc.), movement type (blinking, eye closing, etc.), number of movements, dwell time in a preset area, eye gaze angle information, and so on, different eye movement information may correspond to different interaction information. For example: when the eye movement information indicates an upward eyeball movement whose amplitude is greater than a preset threshold, interaction information for moving the displayed data upward is generated; when the eye movement information includes a dwell time in a preset area and that dwell time is greater than or equal to a preset threshold, interaction information for marking the displayed data, or for instant communication with a participant, can be generated. Therefore, in order to accurately assist the normal and efficient operation of the teleconference, after the eye movement information is acquired it may be analyzed to generate the interaction information corresponding to the teleconference. In some examples, a mapping relationship between eye movement information and interaction information is preconfigured; after the eye movement information is acquired, the prestored mapping relationship can be obtained by accessing a preset area, and the interaction information corresponding to the teleconference is then generated based on the eye movement information and the mapping relationship, so that the interaction information can be generated quickly and accurately.
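The preconfigured mapping described above can be sketched as a lookup table keyed on the kind of eye-movement event, gated by thresholds. Event names, interaction labels, and threshold values below are illustrative assumptions, not the patent's actual configuration:

```python
# Hypothetical sketch: mapping eye-movement events to interaction
# information via a preconfigured lookup table.
AMPLITUDE_THRESHOLD = 0.2   # assumed normalized movement-amplitude threshold
DWELL_THRESHOLD_S = 1.5     # assumed dwell-time threshold in seconds

# Preconfigured mapping: (event kind, qualifier) -> interaction information
INTERACTION_MAP = {
    ("move", "up"): "scroll_display_up",
    ("move", "down"): "scroll_display_down",
    ("dwell", "document"): "annotate_display_data",
    ("dwell", "participant"): "start_instant_communication",
}

def generate_interaction(kind, qualifier, magnitude):
    """Return interaction information for an eye-movement event, or None."""
    if kind == "move" and magnitude <= AMPLITUDE_THRESHOLD:
        return None  # movement amplitude too small to count as intentional
    if kind == "dwell" and magnitude < DWELL_THRESHOLD_S:
        return None  # gaze did not stay in the area long enough
    return INTERACTION_MAP.get((kind, qualifier))
```

For instance, an upward eyeball movement with amplitude above the threshold would map to scrolling the displayed data up, matching the example in the paragraph above.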
Step S203: and performing an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference.
After the interaction information corresponding to the teleconference is generated, an interactive operation corresponding to the teleconference may be performed based on it. For example: if the interaction information is for marking the displayed data, the marking operation can be performed on the displayed data based on the interaction information; if the interaction information is for instant communication with a participant, instant communication between the speaker and that participant can be established based on the interaction information. In this way the normal operation of the teleconference can be assisted effectively and stably.
In some examples, to improve the security and reliability of the teleconference, before the eye movement information of the participants is obtained, the method further includes: acquiring identity information of the participants; and performing identity authentication and permission verification on the participants based on the identity information, so as to determine whether each participant is a legitimate participant of the teleconference. When a participant is legitimate, that participant's permission information can be determined from the identity information, and different permission information corresponds to different conference display data; when a person is not legitimate, that person can be prohibited from joining the teleconference.
Specifically, in order to accurately enable conference participants with different permissions to view data matched with the permissions, the method in this embodiment may further include: acquiring conference permission information of each conference participating terminal in a teleconference; determining conference data which can be displayed by each conference participating terminal based on the conference permission information, wherein the conference data comprises at least one part of a display file which needs to be displayed in a remote conference; and displaying the corresponding conference data by using the conference-participating terminal.
Determining conference data that can be displayed by each participating terminal based on the conference permission information may include: in the display file, determining the in-authority data and the out-authority data corresponding to each participating terminal based on the conference authority information; processing the data outside the authority to obtain processed data which cannot be checked; and acquiring conference data which can be displayed by each conference participating terminal based on the processed data and the data in the authority.
According to the teleconference method provided by this embodiment, the eye movement information of the participants is acquired, the interaction information corresponding to the teleconference is generated based on the eye movement information, and the interactive operation corresponding to the teleconference is performed based on the interaction information. This effectively allows the speaker to interact naturally with the other participants through eye contact, makes the rhythm of the teleconference more natural and efficient, guarantees the quality and effect of conference information transmission, and further improves the practicability of the method.
Fig. 3 is a schematic flowchart of a process of generating interaction information corresponding to a teleconference based on eye movement information according to an embodiment of the present invention; in addition to generating the interaction information corresponding to the teleconference based on the pre-configured mapping relationship between the eye movement information and the interaction information, there are other implementation manners for generating the interaction information, specifically, as shown in fig. 3, the generating the interaction information corresponding to the teleconference based on the eye movement information of the present embodiment may include:
step S301: an operational mode of the teleconference is determined.
When a teleconference is created with the teleconference device, it may run in different operation modes. In some examples, the operation mode may be any one of the following: a speech mode, a discussion mode, a conversation mode, and the like. When the teleconference is in the speech mode, the only person permitted to speak may be the speaker, with the other participants muted. Information related to the teleconference (document information, image information, video information, and the like) and the image information of each participant may be displayed on the display interface of the teleconference device corresponding to the speaker; specifically, the conference-related information may be displayed in the middle of the display interface, and the captured image information of each participant at the upper, lower, left, or right end of the display interface. Understandably, when the number of participants is large, the display size of each participant's image information is relatively small; when the number of participants is small, it is relatively large. Similarly, the display interface of the teleconference device corresponding to each participant can display the conference-related information and the captured image information of the speaker, so that the speaker can communicate effectively with each participant through the teleconference device during the conference, improving the quality and effect of conference transmission.
When the teleconference is in the discussion mode, the people permitted to speak may include the speaker and/or other participants, and their number may be two or more, so that a discussion can take place. In a specific implementation, the conference-related information and the captured image information of each participant can be displayed on the display interface corresponding to the speaker: the speaker and the participants joining the discussion may be displayed in the upper half of the interface with the conference-related information in the lower half, or the conference-related information in the upper half with the speaker and the discussing participants in the lower half, so that the discussion objects are highlighted and the discussion proceeds more efficiently.
Likewise, when conference-related information is displayed, in order to improve the quality and efficiency of the discussion, at least part of the content under discussion can be marked within the displayed information based on the eye movement information, for example by highlighting, bold display, icon display, and the like. This allows the speaker and each participant to locate the content being discussed more directly and quickly, so that when a discussion is needed during the teleconference it can be carried out promptly and efficiently, improving the quality and effect of the conference.
When the teleconference is in the conversation mode, the people permitted to speak may include the speaker and/or other participants, and their number is two or more, so that a dialogue can take place. In a specific implementation, the captured image information of each participant who needs to converse can be displayed on the display interface corresponding to the speaker, with no conference-related data displayed; the speaker can then communicate face to face with each participant through the teleconference device. Similarly, the captured image information of the speaker can be displayed on the display interface of the participants joining the conversation, while the display interface of participants not joining the conversation can show the captured image information of each participant and the speaker, with the participants and speaker currently conversing marked, so that when a conversation is needed during the teleconference it can be carried out promptly and effectively, improving the quality and effect of the conference.
It should be noted that the operation mode of the teleconference is not limited to the above modes; those skilled in the art may configure modes according to the specific application scenario or requirements. For example, the operation modes may include a free mode, in which the display interface can freely switch between displaying teleconference-related content and the captured image information of the speakers, different speakers may have different captured image information, and so on, which is not described again here.
When a teleconference is held, different operation modes yield different conference effects; therefore, to ensure that the teleconference proceeds efficiently, its operation mode may be determined first. In some examples, the operation mode may be determined from a mode selection operation by the speaker. Specifically, before the teleconference starts, the display interface of the teleconference device corresponding to the speaker may display controls for the operation modes the teleconference supports, for example a speech mode control, a conversation mode control, and a discussion mode control; a selection operation input by the user on these controls is then acquired, and the current operation mode of the teleconference is determined from that selection. For example, when a selection of the speech mode control is obtained, the current operation mode of the teleconference can be determined to be the speech mode.
In other examples, the operating mode of the teleconference may be automatically determined based on a voice configuration operation of the teleconference by the speaker or a default voice configuration operation of the teleconference, in which case the operating mode of the teleconference is determined, including: acquiring sound configuration information corresponding to each participant participating in the teleconference; based on the sound configuration information, an operational mode of the teleconference is determined.
Specifically, since the operation mode of the teleconference may be related to the sound configuration information, in order to determine the operation mode accurately, the sound configuration information corresponding to each participant (including the speaker and the other participants) may be obtained before the teleconference starts. This information may be generated by a sound configuration operation of the speaker, for example when the speaker limits the number of people allowed to speak in the teleconference to 1, 2, or more than 2; alternatively, it may be generated by a default configuration operation, for example a default configuration in which only the speaker may speak and everyone else is muted, or one in which everyone may speak.
After the sound configuration information corresponding to each participant is acquired, the sound configuration information corresponding to each participant can be analyzed to determine the operation mode of the teleconference. In some examples, a mapping relationship between the sound configuration information corresponding to each participant and the operation mode of the teleconference is pre-configured, and the operation mode of the teleconference can be determined based on the mapping relationship and the sound configuration information. For example, when the obtained sound configuration information is used to identify that the number of persons capable of sounding in a preset time period is 1, it may be determined that the operation mode of the teleconference is a speech mode; when the acquired sound configuration information is used for identifying that the number of the persons capable of sounding in a preset time period is 2, determining that the running mode of the teleconference is a conversation mode; when the acquired sound configuration information is used for identifying that the number of persons capable of sounding in a preset time period is more than 2, the running mode of the teleconference can be determined to be the discussion mode.
In other examples, a display file is often present in a teleconference, and different operation modes have different display requirements for it; the operation mode may therefore depend not only on the sound configuration information but also on the display file. To determine the operation mode accurately, determining the operation mode of the teleconference based on the sound configuration information in this embodiment may include: detecting whether a display file exists in the teleconference; and determining the operation mode of the teleconference based on the detection result for the display file and the sound configuration information.
Before the teleconference starts, if the speaker has a display file to present, the display file can be uploaded to the teleconference device, or obtained by accessing a preset area, so that it can be shown through the display interface; if the speaker has no display file to present, no upload is needed. To determine the operation mode accurately, whether a display file exists in the teleconference can then be detected, and the detection result and the sound configuration information analyzed to determine the operation mode.
Wherein, determining the operation mode of the teleconference based on the detection result of the display file and the sound configuration information may include: counting the number of personnel capable of performing voice interaction operation within a preset time period based on the sound configuration information; when the detection result is that the display file exists and the number of the personnel is one, determining that the operation mode is a speech mode; when the detection result is that the display file exists and the number of the personnel is at least two, determining that the operation mode is a discussion mode; and when the detection result is that no display file exists and the number of the persons is at least two, determining that the operation mode is a conversation mode.
Specifically, because the sound configuration information corresponding to each participant identifies whether that participant may speak or perform voice interaction, after the sound configuration information is acquired, the number of people who may perform voice interaction within the preset time period is counted from it; the detection result for the display file and this counted number can then be analyzed together to determine the operation mode of the teleconference.
For example, when the detection result indicates that a display file exists and the number of people is one, only one person is permitted to perform voice interaction; in this case the speaker is typically explaining the conference content, so the operation mode may be determined to be the speech mode. When a display file exists and the number of people is at least two, at least two people are permitted to perform voice interaction and the speaker can communicate with other participants, so the operation mode may be determined to be the discussion mode. When no display file exists and the number of people is at least two, no file is shown on the teleconference interface, which indicates a scenario in which the speaker and others communicate directly, so the operation mode may be determined to be the conversation mode.
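The decision rules above can be sketched as a small function, assuming the number of people permitted to perform voice interaction has already been counted from the sound configuration information; the function name and mode labels are illustrative:

```python
def determine_operation_mode(has_display_file, speaker_count):
    """Decide the teleconference operation mode from the display-file
    detection result and the counted number of people permitted to
    perform voice interaction within the preset time period."""
    if has_display_file and speaker_count == 1:
        return "speech"        # one person explaining a displayed document
    if has_display_file and speaker_count >= 2:
        return "discussion"    # several people debating around the document
    if not has_display_file and speaker_count >= 2:
        return "conversation"  # direct face-to-face style exchange, no file
    return None                # combination not covered by the stated rules
```

The single person with a display file therefore maps to the speech mode, matching the first rule in the paragraph above.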
Step S302: based on the eye movement information and the operation mode, interaction information corresponding to the teleconference is generated.
Since the same eye movement information may generate different interaction information when the teleconference is in different operation modes, after the operation mode of the teleconference is determined, the eye movement information and the operation mode may be analyzed to generate interaction information corresponding to the teleconference.
In some instances, generating interaction information corresponding to the teleconference based on the eye movement information and the operating mode may include: and when the operation mode is the conversation mode, generating conversation interaction information corresponding to the participants in the teleconference based on the eye movement information, wherein the conversation interaction information is used for determining target participants about to have a conversation with the speaker in the teleconference.
Specifically, when the operation mode is the conversation mode, no file is usually shown on the teleconference interface; only the captured image information of each participant needs to be displayed. As shown in fig. 4, with one speaker and eight participants, the captured image information of the eight participants can be displayed evenly on the speaker's display interface. After the speaker's eye movement information is acquired, since it is mainly used at this point to determine the target participant who is about to converse with the speaker, conversation interaction information for establishing the conversation between the speaker and that participant can be generated based on the eye movement information.
For example, while the speaker views the teleconference interface, the speaker's eye movement information can be acquired in real time; when the gaze stays at a preset position for longer than a preset duration, the participant corresponding to that position can be determined to be the target participant, and conversation interaction information identifying that target participant can then be generated.
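The dwell-based selection just described can be sketched as follows. The sampling interval, threshold, and rectangular avatar regions are assumptions for illustration:

```python
# Hypothetical dwell-time target selection: accumulate how long the
# speaker's gaze stays inside each participant's avatar region and
# pick that participant as the conversation target once the dwell
# time exceeds a preset threshold.
DWELL_THRESHOLD_S = 1.5  # assumed preset duration in seconds

def pick_target(gaze_samples, regions, sample_dt=0.1):
    """gaze_samples: iterable of (x, y) gaze positions sampled every
    sample_dt seconds; regions: participant name -> (x0, y0, x1, y1)
    axis-aligned avatar rectangle. Returns the target participant."""
    dwell = {name: 0.0 for name in regions}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += sample_dt
                if dwell[name] >= DWELL_THRESHOLD_S:
                    return name  # target participant for the conversation
    return None  # no region held the gaze long enough
```

In a real system the conversation interaction information would then be built from the returned participant identifier and routed only to that participant's teleconference device.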
In other examples, in order to ensure that the teleconference can be smoothly performed in the conversation mode, the teleconference device may further include a voice recognition device, the voice recognition device acquires voice information of the speaker, and conversation interaction information for implementing conversation operation between the speaker and participants participating in the teleconference is generated based on the voice information and the eye movement information, so that the accuracy of generating the conversation interaction information may be further improved.
Example 1: referring to fig. 4, during the teleconference, the speaker's eye movement information and voice information may be acquired in real time. When it can be determined from the eye movement information that the speaker's gaze has stayed on the upper part of the page for longer than a preset duration, and the voice information is recognized as, for example, "please introduce yourself", the target participant for the conversation can be determined from the eye movement information and the voice information together. Conversation interaction information identifying the target participant can thus be generated, and conversation prompt information can then be generated from it; this prompt information relates only to the target participant and is not received by the teleconference devices of the other participants.
When the interactive information is dialog interactive information, performing an interactive operation corresponding to the teleconference based on the interactive information may include: determining a target participant to be conversed with a speaker in the teleconference based on the conversation interaction information; and generating conversation prompt information with the target participant so that the target participant has a conversation with the speaker based on the conversation prompt information.
After the conversation interaction information is acquired, the target participant who is about to converse with the speaker can be determined from it. So that the target participant learns of the speaker's intention in time, conversation prompt information corresponding to the target participant can be generated; it may include at least one of the following: highlighting of the avatar frame, pop-up prompt information for a conversation with the speaker, and the like. The conversation prompt information can be sent to the teleconference device of the target participant and displayed there, so that the target participant can converse with the speaker based on the prompt.
Example 2: as shown in fig. 5, after the target participant receives the conversation prompt information, the target participant learns from it that the speaker wants a conversation. At this point, to make the eye contact between the speaker and each participant more natural, the speaker and the target participant can be highlighted on the speaker's display interface and on the display interfaces of the other participants: the speaker and the target participant are enlarged in the middle of the interface, and the other participants are shown at reduced size.
When the target participant and the speaker finish the conversation, the highlighted display can be exited, so that the speaker's display interface returns to displaying each participant's avatar evenly, ready for the next conversation.
In this implementation, the teleconference is configured in the conversation mode and the speaker's eye movement information is then detected. Based on that information it can be effectively determined whether the speaker is focusing on a certain participant, and if so, the display interface of that participant is controlled to receive interactive feedback, so that the two parties who need to converse complete eye contact and the conversational relationship in the conference is established naturally. This effectively solves the prior-art problem that remote video lacks direct person-to-person contact, leaving the communication target unclear, especially in unfamiliar scenarios (for example, a first meeting where one cannot call the other party by name), and further improves the stability and reliability of the teleconference method.
It should be noted that, because the operation mode of the teleconference may include a speech mode or a discussion mode besides a conversation mode, and different operation modes may correspond to different generation manners of the interactive information, another implementation manner of generating the interactive information corresponding to the teleconference based on the eye movement information and the operation mode in this embodiment may include: and when the operation mode is a speech mode or a discussion mode, generating annotation interactive information corresponding to a display file in the teleconference based on the eye movement information.
When the teleconference is in the speech mode, the speech content changes as the speaker proceeds. So that participants can follow the progress in time, when the operation mode is the speech mode or the discussion mode, the eye movement information of the speaker and/or each participant can be obtained, and annotation interaction information corresponding to the display file in the teleconference can then be generated from it. It should be noted that the annotation interaction information corresponding to the speaker's eye movement information and that corresponding to each participant's eye movement information may differ: the speaker's annotation interaction information is mainly used to annotate the speech content in the display file, while each participant's is mainly used to mark specific content in the display file, indicating that the participant is interested in or has a question about that content.
Following the above, when the interaction information is annotation interaction information, performing the interactive operation corresponding to the teleconference based on the interaction information may include: displaying the display file with annotations based on the annotation interaction information. Specifically, this may include: determining the conference identity of the participant corresponding to the eye movement information; when the identity is the speaker, dynamically annotating and/or dynamically displaying the data related to the conference progress in the display file based on the annotation interaction information; and when the identity is a listener, marking the data of interest in the display file based on the annotation interaction information.
Each participant in the teleconference may have a corresponding conference identity, based on which the conference identity of the participant corresponding to the eye movement information can be determined. For example, when the participant is the speaker, a speaker identity identifier can be shown in the display interface of the teleconference; when the participant is a listener, a listener identity identifier can be shown in the display interface. After the conference identity of the participant corresponding to the eye movement information is confirmed, the corresponding interaction operation can be performed based on each participant's conference identity and annotation interaction information. Specifically, when the conference identity is the speaker, the data related to the conference progress in the display file is dynamically annotated and/or dynamically displayed based on the annotation interaction information, where the dynamic annotation may include highlighting, magnifying, displaying in a specific color, and so on, and the dynamic display may include sliding the file up or down, enlarging it, shrinking it, and so on; when the conference identity is a listener, the data of interest in the display file is annotated based on the annotation interaction information.
In example 3, referring to fig. 6, when the teleconference is in the speech mode, the image acquisition information of each participant and the display file related to the conference can be shown in the display interface corresponding to the speaker. When the speaker is speaking about the first line of the display file, the first line can be highlighted; when the speaker moves on to the third line, the third line can be highlighted. In this way, when a file needs to be presented in a teleconference, the data in the display file can be dynamically annotated based on the speaker's eye movement information (gaze position), so that participants can follow the progress of the teleconference in real time and stay attentive.
In addition, the speaker can interact with the data of the display file through eye movement information. Specifically, when the speaker's gaze moves upward, the display file can be scrolled up automatically; when the speaker's gaze moves downward, the display file can be scrolled down automatically; and when the speaker's gaze stays at a preset position for longer than a preset duration, the data at that position in the display file can be magnified, and so on. In this way, the speaker can edit and interact with the display file naturally and automatically through eye movement information, avoiding the frequent, distracting operations otherwise needed to adjust the display file with a mouse or keyboard.
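A minimal sketch of this gaze-driven control, with an assumed dwell threshold and screen coordinates in which a smaller y value means higher on the screen (all illustrative, not from the original disclosure):

```python
# Hypothetical sketch of gaze-driven file control: map the speaker's gaze
# movement and dwell time to a display-file operation.

DWELL_THRESHOLD_S = 2.0  # assumed "preset duration" for magnification

def gaze_to_action(prev_y, curr_y, dwell_seconds):
    """Return the display-file operation implied by the gaze change."""
    if dwell_seconds >= DWELL_THRESHOLD_S:
        return "magnify"          # gaze held at one position: zoom in
    if curr_y < prev_y:
        return "scroll_up"        # gaze moved upward on the screen
    if curr_y > prev_y:
        return "scroll_down"      # gaze moved downward on the screen
    return "none"
```

A real system would feed this from an eye tracker's per-frame gaze samples and debounce the result; the sketch only shows the decision rule.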
Similarly, when the participant is a listener, the listener's eye movement information can be obtained. When the eye movement information indicates that the listener has paid attention to certain data in the display file for longer than the preset duration, annotation interaction information can be generated based on the listener's eye movement information, and the data of interest in the display file (i.e., the attended data mentioned above) can then be annotated based on that annotation interaction information.
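A sketch of the dwell-based interest detection described here, under the assumption that gaze samples arrive as timestamped positions and that the preset duration is a tunable threshold (both hypothetical):

```python
# Hypothetical sketch: report regions of the display file on which a
# listener's accumulated gaze dwell exceeds a preset duration.

INTEREST_DWELL_S = 1.5  # assumed threshold

def detect_interest(gaze_samples, region_of, threshold=INTEREST_DWELL_S):
    """gaze_samples: list of (timestamp_s, position) pairs in time order;
    region_of maps a position to a region id of the display file.
    Returns region ids whose accumulated dwell time exceeds the threshold."""
    dwell = {}
    for (t0, p0), (t1, _) in zip(gaze_samples, gaze_samples[1:]):
        region = region_of(p0)
        dwell[region] = dwell.get(region, 0.0) + (t1 - t0)
    return [r for r, d in dwell.items() if d > threshold]
```

Each returned region would then be wrapped as one piece of annotation interaction information for that listener.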
It should be noted that, to dynamically annotate and/or display the data related to the conference progress accurately, the speaker's voice information may also be acquired in addition to the eye movement information. The annotation interaction information is then generated by combining the voice information with the eye movement information, and the data related to the voice information in the display file can be dynamically annotated and/or dynamically displayed based on the generated annotation interaction information.
In example 4, referring to fig. 7, when the teleconference is in the discussion mode, two or more participants can perform voice interaction at the same time, and the display interface of the teleconference also needs to show the display file. To improve the quality and efficiency of the discussion, corresponding annotation interaction information can be generated based on the eye movement information of each participant, and annotation display can then be performed on preset data in the display file based on that annotation interaction information; in this case the display file may contain multiple pieces of annotated data.
In some examples, to distinguish the display data corresponding to different participants, the image information corresponding to each participant can be attached as an identity identifier when the display data is annotated. Each participant can then see at a glance which data every other participant is focusing on, and interactive display can be performed based on the annotated data, which effectively enables fast and reliable discussion of specific data in a teleconference.
It should be noted that, to annotate and display the preset data in the display file accurately based on the annotation interaction information, the listeners' voice information may also be acquired in addition to their eye movement information. The annotation interaction information is generated by combining the voice information with the eye movement information, and the specific data related to the voice information in the display file can then be annotated and displayed based on the generated annotation interaction information.
In this implementation, fast and accurate discussion can take place among all the participants in the teleconference. Moreover, a participant taking part in the discussion can target a particular position in the display file through eye movement information, and the other participants can intuitively locate the position of the file being discussed, which effectively removes the cost of describing the position verbally or marking it manually.
In this embodiment, the operation mode of the teleconference is determined, and the interaction information corresponding to the teleconference is then generated based on the eye movement information and the operation mode, which effectively ensures the quality and efficiency of generating the interaction information and further improves the quality and effect of the teleconference.
Fig. 8 is a schematic flowchart of another teleconference method according to an embodiment of the present invention. On the basis of the foregoing embodiment, referring to fig. 8, after the data of interest in the display file is annotated based on the annotation interaction information when the conference identity of the participant is a listener, the method in this embodiment may further include:
Step S801: count the annotation interaction information of all the listeners to obtain annotation statistical information.
Because the quality and effect of information transmission in a teleconference are closely related to the state of the listeners, in order to improve them, after the data of interest in the display file is annotated based on the annotation interaction information, the annotation interaction information of all the listeners can be counted to obtain annotation statistical information. As shown in fig. 9, the annotation statistical information may include at least one of the following: the number of pieces of annotation interaction information corresponding to the display file, the display data position corresponding to each piece of annotation interaction information, the identity identifier of the participant corresponding to each piece of annotation interaction information, and so on.
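A sketch of how the three kinds of annotation statistical information named above might be aggregated; the field names are assumptions, not from the original disclosure:

```python
# Hypothetical sketch: aggregate listeners' annotations into the statistics
# described above (count, positions, participant identities).

def annotation_statistics(annotations):
    """annotations: list of dicts with 'position' and 'participant_id'."""
    return {
        "count": len(annotations),
        "positions": [a["position"] for a in annotations],
        "participants": sorted({a["participant_id"] for a in annotations}),
    }
```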
Step S802: determine the conference hosting state of the speaker based on the annotation statistical information.
The conference hosting state of the speaker may include: a state that satisfies the expected state and a state that does not. After the annotation statistical information is acquired, it can be analyzed to determine the conference hosting state of the speaker. In some examples, determining the conference hosting state of the speaker based on the annotation statistical information may include: acquiring a machine learning model for analyzing the conference hosting state, the model having been trained to determine the speaker's conference hosting state based on annotation statistical information; after the annotation statistical information is acquired, it can be input into the machine learning model to obtain the conference hosting state output by the model.
In other examples, determining the conference hosting state of the speaker based on the annotation statistical information may include: when the number of pieces of annotation statistical information is smaller than a preset number, determining that the conference hosting state corresponding to the speaker does not satisfy the expected state; and when it is greater than or equal to the preset number, determining that the conference hosting state corresponding to the speaker satisfies the expected state.
Specifically, a preset number for analyzing the annotation statistical information (mainly the number of pieces of annotation interaction information corresponding to the display file) is set in advance. After the annotation statistical information is obtained, it can be compared against the preset number. When the count is smaller than the preset number, it indicates that the listeners paid attention to few points of the conference content while the speaker was hosting the teleconference, which indirectly suggests that the quality and effect of the teleconference hosted by the speaker are poor, and it can then be determined that the conference hosting state corresponding to the speaker does not satisfy the expected state. When the count is greater than or equal to the preset number, it indicates that the listeners paid attention to many points of the conference content, which indirectly suggests that the quality and effect of the teleconference are good, and it can then be determined that the conference hosting state corresponding to the speaker satisfies the expected state.
In addition, after the annotation statistical information is acquired, it can be displayed, so that the attention paid by the participants to the speech content or to specific problems can be gauged from the number and concentration of avatar tags on the screen, and a lively discussion atmosphere can be created, which helps improve the quality and effect of the teleconference.
Step S803: when the conference hosting state does not satisfy the expected state, generate adjustment prompt information corresponding to the speaker.
Specifically, when it is determined that the conference hosting state does not satisfy the expected state, adjustment prompt information corresponding to the speaker can be generated in order to improve the quality and effect of the teleconference. The adjustment prompt information is used to prompt the speaker to adjust the hosting mode and speaking style of the teleconference, thereby improving the conference quality and effect.
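Steps S801 to S803 can be sketched together as follows; the preset number and the prompt wording are illustrative assumptions:

```python
# Hypothetical sketch of steps S801-S803: count listeners' annotations,
# decide the hosting state against a preset number, and emit an adjustment
# prompt when the expected state is not met.

PRESET_NUMBER = 5  # illustrative threshold

def review_hosting(annotations, preset=PRESET_NUMBER):
    count = len(annotations)                 # S801: annotation statistics
    meets_expectation = count >= preset      # S802: conference hosting state
    prompt = None
    if not meets_expectation:                # S803: adjustment prompt
        prompt = ("Few points of the content drew attention; consider "
                  "adjusting the hosting mode and speaking style.")
    return meets_expectation, prompt
```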
It should be noted that the method in this embodiment may further include: when the conference hosting state satisfies the expected state, no adjustment prompt information corresponding to the speaker needs to be generated, and the speaker can keep the current hosting state of the teleconference.
In this embodiment, annotation statistical information is obtained by counting the annotation interaction information of all the listeners, the conference hosting state of the speaker is determined based on the annotation statistical information, and adjustment prompt information corresponding to the speaker is generated when the conference hosting state does not satisfy the expected state. The speaker can thus be reminded in time to adjust their speaking style and hosting mode when the hosting state of the teleconference is poor, which improves the quality and effect of the teleconference and further improves the practicability of the teleconference method.
Fig. 10 is a schematic flowchart of another teleconference method according to an embodiment of the present invention. On the basis of the foregoing embodiment, referring to fig. 10, after the annotation statistical information is acquired and found to be greater than or equal to the preset number, the method in this embodiment may further include:
Step S1001: determine the attention hotspot information corresponding to all the listeners based on the annotation statistical information.
After the annotation interaction information of all the listeners is counted to obtain the annotation statistical information, the annotation statistical information can be analyzed to determine the attention hotspot information corresponding to all the listeners. In some examples, determining the attention hotspot information corresponding to all the listeners based on the annotation statistical information may include: determining, based on the annotation statistical information, the number of pieces of annotation interaction information corresponding to the display file and the display data positions corresponding to them; when the number of pieces of annotation interaction information within a preset area is greater than or equal to a preset number, the data in that preset area can be determined as the attention hotspot information corresponding to most of the listeners, which effectively ensures the accuracy and reliability of determining the attention hotspot information.
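A sketch of this region-counting rule for hotspot detection; the region mapping and the preset number are illustrative assumptions:

```python
# Hypothetical sketch of step S1001: find hotspot regions of the display file
# where the number of listeners' annotations reaches a preset number.

from collections import Counter

HOTSPOT_MIN = 3  # illustrative preset number

def find_hotspots(annotation_positions, region_of, minimum=HOTSPOT_MIN):
    """annotation_positions: positions of listeners' annotations; region_of
    maps a position to a region id of the display file. Returns the region
    ids whose annotation count reaches the minimum."""
    counts = Counter(region_of(p) for p in annotation_positions)
    return sorted(r for r, n in counts.items() if n >= minimum)
```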
Step S1002: generate interaction prompt information based on the attention hotspot information, where the interaction prompt information is used to prompt a user whether to generate an exchange group corresponding to the attention hotspot information based on the annotation statistical information.
After the attention hotspot information is acquired, it can be analyzed to generate interaction prompt information, which is used to prompt the user whether to generate an exchange group corresponding to the attention hotspot information based on the annotation statistical information. The interaction prompt information may be pop-up prompt information, and the pop-up may include a "confirm control" for prompting the user to generate the exchange group corresponding to the attention hotspot information based on the annotation statistical information, and a "cancel control" for prompting the user not to generate it.
Step S1003: generate an exchange group corresponding to the attention hotspot information in response to confirmation of the interaction prompt information.
When the user performs an input operation on the "confirm control" in the interaction prompt information, the user's confirmation of the interaction prompt information is obtained, and an exchange group corresponding to the attention hotspot information can then be generated based on that confirmation; the exchange group can include all the participant information corresponding to the attention hotspot information. For example, referring to fig. 11, when three pieces of annotation interaction information correspond to a certain position in the display file of a teleconference, the data at that position is attention hotspot information. After the user's confirmation of the interaction prompt information is obtained, an exchange group corresponding to the attention hotspot information can be generated based on the confirmation; the group may include the participants and the speaker corresponding to the attention hotspot information, and within the group they can interact quickly and effectively about the attention hotspot information.
In addition, the method in this embodiment may further include: when the user performs an input operation on the "cancel control" in the interaction prompt information, the user's cancellation of the interaction prompt information is obtained, and the exchange group corresponding to the attention hotspot information does not need to be generated, so that whether an exchange group needs to be established can be flexibly determined based on the user's needs.
In this embodiment, the attention hotspot information corresponding to all the listeners is determined based on the annotation statistical information, interaction prompt information is then generated based on the attention hotspot information, and an exchange group corresponding to the attention hotspot information is generated in response to confirmation of the interaction prompt information. An exchange group can thus be generated once the attention hotspot information is determined, enabling concentrated, targeted communication about the attention hotspot information quickly and effectively and improving the transmission quality and efficiency of the teleconference.
Fig. 12 is a schematic flowchart of a teleconference method according to an embodiment of the present invention. On the basis of any one of the foregoing embodiments, referring to fig. 12, the teleconference method in this embodiment can implement not only the interaction operations in each operation mode, but also switching between operation modes. In this case, generating the interaction information corresponding to the teleconference based on the eye movement information and the operation mode may include:
Step S1201: when the operation mode is the first mode, acquire voice information.
The first mode can be any one of the following: the speech mode, the conversation mode, or the discussion mode. When the operation mode is the first mode, voice information can be acquired; the voice information here can be voice information uttered by the speaker or by a participant. When the voice information is uttered by the speaker, it can be, for example, "adjust the operation mode of the teleconference to the second mode" or "please have a certain student share their own opinion"; when the voice information is uttered by a participant, it can be, for example, "for a certain location, I think that...".
Step S1202: generate mode switching information corresponding to the teleconference based on the voice information and the eye movement information.
After the voice information and the eye movement information are acquired, they can be analyzed to generate mode switching information corresponding to the teleconference. The mode switching information is used to switch the teleconference from the first mode to a second mode, where the second mode can be any operation mode different from the first mode.
Step S1203: switch the teleconference from the first mode to the second mode based on the mode switching information.
After the mode switching information is acquired, it can be analyzed and processed to switch the teleconference from the first mode to the second mode, which effectively achieves flexible switching of the operation mode of the teleconference.
In example 1, the operation mode of the teleconference can be switched under the speaker's control. When the current operation mode of the teleconference is the speech mode, if the speaker's voice information is acquired, for example "switch the operation mode of the teleconference to the conversation mode or the discussion mode" or "please have a certain student share their own opinion", mode switching information corresponding to the teleconference can be generated based on the voice information and the eye movement information, and the teleconference can then be switched from the speech mode to the conversation mode or the discussion mode based on the mode switching information.
In example 2, the operation mode of the teleconference can be switched automatically based on the participants' voice information. When the current operation mode of the teleconference is the speech mode and the voice information of multiple participants is acquired within a preset time period: if the voice information of two participants is acquired simultaneously, the current operation mode can be switched from the speech mode to the conversation mode; if the voice information of more than two participants is acquired simultaneously, the current operation mode can be switched from the speech mode to the discussion mode, so that the teleconference can switch automatically and effectively between different operation modes.
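The automatic switching rule of example 2 can be sketched as follows (the mode names and simultaneity counts follow the description above; the function shape itself is an assumption):

```python
# Hypothetical sketch: choose the next operation mode from the number of
# participants whose voice was captured simultaneously in the time window.

def next_mode(current_mode, active_speakers):
    """Return the operation mode implied by the count of active speakers."""
    if current_mode == "speech" and active_speakers == 2:
        return "conversation"     # two simultaneous voices: conversation mode
    if current_mode == "speech" and active_speakers > 2:
        return "discussion"       # more than two voices: discussion mode
    return current_mode           # otherwise keep the current mode
```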
In this embodiment, when the operation mode of the teleconference is the first mode, voice information is acquired, mode switching information corresponding to the teleconference is then generated based on the voice information and the eye movement information, and the teleconference is switched from the first mode to the second mode based on the mode switching information. Automatic or user-initiated switching of the operation mode of the teleconference is thus effectively achieved, which further improves the flexibility and reliability of the teleconference method.
In a specific application, an embodiment of the present application provides a teleconference method that implements information interaction in a teleconference based on eye tracking technology and lets participants perform more natural human-computer interaction in several different teleconference modes. In a specific implementation, the execution subject of the teleconference method can be a head-mounted display device implemented based on AR/VR technology, and the method can also be applied in a conference client.
Take a teleconference having a speech mode, a discussion mode, and a conversation mode as an example, where the speech mode implements dynamic guidance via gaze identification, the discussion mode implements multi-person identification of a focused discussion hotspot, and the conversation mode implements gaze-evoked conversation relationships. As shown in fig. 13, the teleconference method may include the following steps:
Step 1: multiple remote conference endpoints form a remote conference room (which can be a virtual conference room), and each participant can then enter the virtual conference room.
Step 2: identify the identities of the participants and number them.
Step 3: identify the scene mode of the conference by analyzing sound or environmental factors, for example: speech mode, conversation mode, or discussion mode.
Step 4: locate the gaze position (i.e., eye movement information) of one or more "speakers" in real time through eye movement recognition technology.
Step 5: mark a role-based interaction identifier at the gaze position on the video conference interface, or trigger a certain menu function (such as calling a communication object).
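The five steps above can be sketched end to end as follows; every function and field name is hypothetical, standing in for real capture and rendering components:

```python
# Hypothetical sketch of one processing pass over steps 2-5: number the
# participants, detect the scene mode, track each speaker's gaze, and
# produce role-based marks for the conference interface.

def run_meeting_frame(participants, detect_mode, track_gaze, render_mark):
    """participants: list of dicts with a 'role' field; the three callables
    stand in for audio analysis, eye tracking, and UI rendering."""
    numbered = {i: p for i, p in enumerate(participants)}   # step 2: numbering
    mode = detect_mode()                                    # step 3: scene mode
    marks = []
    for idx, person in numbered.items():
        if person.get("role") == "speaker":                 # step 4: speakers only
            position = track_gaze(person)                   # gaze position
            marks.append(render_mark(idx, mode, position))  # step 5: marking
    return mode, marks
```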
The technical solution provided by this embodiment of the application not only enables the speaker to interact naturally with the participants based on eye movement information, but also makes the rhythm of the teleconference more natural, flexible, and reliable, so that the quality and effect of conference information transmission are effectively guaranteed and the practicability of the method is further improved.
Fig. 14 is a schematic flowchart of a data interaction method according to an embodiment of the present invention. Referring to fig. 14, this embodiment provides a data interaction method whose execution subject may be a data interaction apparatus. The data interaction apparatus may be implemented as software or as a combination of software and hardware; in particular, when the data interaction apparatus is hardware, it may be embodied as various electronic devices having a display screen and an eye tracking device, including but not limited to a smartphone, a tablet computer, a personal computer (PC), a conference room display device, a head-mounted display device implemented based on XR technology, and so on. When the data interaction apparatus is implemented as software, it can be installed in the electronic devices exemplified above. The data interaction method in this embodiment may include the following steps:
Step S1401: acquire the data to be processed displayed in the display interface.
The data to be processed may include at least one of: design data, research and development data, image data, consumption tendency survey data, and the like. The data to be processed can be uploaded to the data interaction apparatus by a user, so that the apparatus can acquire the data to be processed that needs to be displayed; alternatively, the data to be processed may be stored in a preset area, and the data that needs to be displayed can be acquired by accessing the preset area.
After the data to be processed is acquired, it can be displayed in the display interface, so that everyone who can view the display interface can perform data interaction operations based on the displayed data.
Step S1402: determine the eye movement information with which multiple persons respectively view the data to be processed.
After the data to be processed is acquired and displayed, eye movement information that a plurality of people respectively view the data to be processed may be determined, where a specific acquisition manner of the eye movement information is similar to the implementation manner of step S201 in the foregoing embodiment, and is not described herein again.
Step S1403: generate interaction information corresponding to the data to be processed based on the eye movement information.
Generating the interaction information corresponding to the data to be processed based on the eye movement information may include: determining, based on the eye movement information, the data of interest corresponding to each person within the data to be processed; and generating annotation interaction information corresponding to each person's data of interest, where the annotation interaction information is used to annotate and display the data of interest.
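A sketch of step S1403 under the assumption that eye movement information arrives as timestamped gaze positions per person and that a dwell threshold defines "data of interest" (both illustrative):

```python
# Hypothetical sketch of step S1403: derive each viewer's data of interest
# from gaze dwell and collect it as annotation interaction information.

def interaction_info(gaze_by_person, region_of, dwell_threshold=1.0):
    """gaze_by_person: {person: [(timestamp_s, position), ...]} in time order;
    region_of maps a position to a region id of the data to be processed.
    Returns {person: [region ids of interest]}."""
    info = {}
    for person, samples in gaze_by_person.items():
        dwell = {}
        for (t0, p0), (t1, _) in zip(samples, samples[1:]):
            region = region_of(p0)
            dwell[region] = dwell.get(region, 0.0) + (t1 - t0)
        # One person may have one or more regions of interest.
        info[person] = sorted(r for r, d in dwell.items()
                              if d >= dwell_threshold)
    return info
```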
Specifically, the implementation of this step is similar to that of steps S202 and S302 in the foregoing embodiments; reference can be made to the above statements, which are not repeated here.
Step S1404: annotate and display the data to be processed based on the interaction information.
Specifically, the implementation of this step is similar to that of step S203 in the foregoing embodiment; reference can be made to the above statements, which are not repeated here.
In example 1, when the data to be processed is design data or research and development data that needs to be analyzed, the design data or research and development data can be displayed through a display interface after it is acquired. The display interface may be shared by multiple data interaction apparatuses, that is, the design data or research and development data can be displayed in the display interfaces corresponding to multiple data interaction apparatuses.
When users view the design data or research and development data through the display interface of the data interaction apparatus, the eye movement information of each person viewing the data can be acquired through the eye tracking device on the data interaction apparatus. The eye movement information may include: the movement range (or movement distance) of the eyeball, the movement path, the movement direction (up, down, left, right, etc.), the movement type (blinking, closing the eyes, etc.), the number of movements, the dwell time in a preset area, eye gaze angle information, and so on.
After each person's eye movement information is acquired, it can be analyzed to determine the data of interest corresponding to each person; the data of interest can be a part of the design data or the research and development data. It should be noted that one person may have one or more pieces of data of interest, and interaction information corresponding to the data of interest within the data to be processed can then be generated. For example, when the design data is vehicle design data, the data of interest corresponding to user A may be the front-end data, the data of interest corresponding to user B may be the window data, and the data of interest corresponding to user C may be the lamp data, and so on.
After the interaction information is generated, the data to be processed can be annotated and displayed based on it, mainly by annotating the data of interest corresponding to each user, so that everyone viewing the design data or research and development data can see the parts that other users are interested in and then discuss them based on the annotations. For example: user A thinks the parameters of the front-end data are inappropriate and need to be modified; user B thinks the size of the window data needs to be adjusted; user C thinks the outline in the lamp data needs to be adjusted; and so on. This effectively improves the quality and efficiency of discussing design data or research and development data and further improves the practicability of the data interaction method.
In example 2, the data to be processed is image data (for example, an advertisement image, a promotional image, or the like) or consumption tendency survey data (electronic data corresponding to items to be traded on a certain platform, for example, a pre-built model of items to be traded in a certain supermarket). After the image data or consumption tendency survey data is obtained, it may be displayed through a display interface, where the display interface may be shared by a plurality of data interaction devices; that is, the image data or consumption tendency survey data may be displayed in the display interfaces corresponding to the plurality of data interaction devices.
When users view the image data or consumption tendency survey data through the display interface of the data interaction device, eye movement information corresponding to each person viewing the image data or consumption tendency survey data can be acquired through an eye tracking device on the data interaction device, and the eye movement information may include: the movement range (or movement distance) of the eyeball, the movement path, the movement direction (up, down, left, right, etc.), the movement type (blinking, eye closing, etc.), the number of movements, the dwell time in a preset area, eye gaze angle information, and the like.
After the eye movement information of each person is obtained, the eye movement information may be analyzed to determine the part of interest corresponding to each person, where the part of interest may be part of the image data or of the consumption tendency survey data. It should be noted that the number of parts of interest corresponding to one person may be one or more. Interaction information corresponding to the parts of interest may then be generated. For example: when the image data is an advertisement image, it may be determined based on the eye movement information of user A that the part of interest corresponding to user A is the upper half of the image, based on the eye movement information of user B that the part of interest corresponding to user B is also the upper half of the image, and based on the eye movement information of user C that the part of interest corresponding to user C is the middle of the image. Alternatively, when the consumption tendency survey data is product model images corresponding to a certain trading platform, it may be determined based on the eye movement information of the plurality of users that the purchase tendency of item a is P1, the purchase tendency of item b is P2, and the purchase tendency of item c is P3, where P1 < P2 < P3.
After the interaction information is generated, the data to be processed may be labeled and displayed based on the interaction information, the labeling being mainly performed for the part of interest corresponding to each user, so that everyone viewing the image data can know the parts of interest corresponding to the other users and then communicate and discuss based on the labeled parts of interest. For example: when user A and user B are interested in the upper half of the image and user C is interested in the middle of the image, the lower half of the image data is weakly attractive and may be adjusted accordingly. This effectively improves the quality and efficiency of communication and discussion about the image data, further improving the practicability of the data interaction method.
In addition, after the part of interest in the consumption tendency survey data is labeled for each user, the purchase tendency of each user for each traded item can be known intuitively. For example: after the number of users interested in a certain item and their purchase tendencies are obtained, the inventory and the platform transaction quantity of each item can be adjusted based on the purchase tendencies corresponding to the different items. Timely and flexible adjustment of the inventory and platform transaction quantity of traded items can thus be effectively realized based on the consumption tendency survey data, which can satisfy users' purchase demands, improve the transaction rate of the items, and further improve the practicability of the method.
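The purchase-tendency ranking in the P1 < P2 < P3 example could be derived, for instance, by aggregating gaze dwell time per item across users. The following sketch is a hypothetical illustration only; the scoring rule (summed dwell seconds) and all names are assumptions, since the embodiment does not specify a scoring formula.

```python
# Hypothetical sketch: rank traded items by an aggregate "purchase tendency"
# score computed as the total gaze dwell time across all users.
def purchase_tendency(per_user_dwell):
    """per_user_dwell: dict of user -> {item: seconds}.
    Returns items ranked from lowest to highest aggregate tendency."""
    totals = {}
    for dwell in per_user_dwell.values():
        for item, seconds in dwell.items():
            totals[item] = totals.get(item, 0.0) + seconds
    return sorted(totals, key=totals.get)

gaze = {
    "user_1": {"item_a": 1.0, "item_b": 2.0, "item_c": 4.0},
    "user_2": {"item_a": 0.5, "item_b": 3.0, "item_c": 5.0},
}
ranking = purchase_tendency(gaze)  # lowest to highest, like P1 < P2 < P3
```

Such a ranking could then feed the inventory and transaction-quantity adjustments described above.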
It should be noted that the method in this embodiment may also execute the method in the embodiment shown in fig. 3 to fig. 13, and for the part of this embodiment that is not described in detail, reference may be made to the relevant description of the embodiment shown in fig. 3 to fig. 13. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 3 to fig. 13, which are not described herein again.
In the data interaction method of this embodiment, the data to be processed displayed in the display interface is acquired, the eye movement information of the plurality of persons viewing the data to be processed is determined, interaction information corresponding to the data to be processed is then generated based on the eye movement information, and the data to be processed is labeled and displayed based on the interaction information. Interactive operation on the data to be processed according to each person's eye movement information is thus effectively realized, the processing of the data to be processed becomes more direct and reliable, the quality and effect of the data interaction method are effectively guaranteed, and the practicability of the method is further improved.
FIG. 15 is a flowchart illustrating another data interaction method according to an embodiment of the present invention. Referring to fig. 15, this embodiment provides another data interaction method. The execution subject of the method may be a data interaction device, which may be implemented as software or as a combination of software and hardware. Specifically, when the data interaction device is implemented as hardware, it may be any of various electronic devices having a display screen and an eye tracking device, which may mainly include head-mounted display devices based on Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR) or Hybrid Reality (HR), video Reality (CR), and so on. When the data interaction device is implemented as software, it can be installed in the electronic devices exemplified above. The data interaction method in this embodiment may include the following steps:
Step S1501: and acquiring a preset virtual space generated by a virtual reality technology.
Extended Reality (XR) technology refers to a combined real-and-virtual, human-machine interactive environment created by computer technology and wearable devices. XR can include Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and video Reality (CR); in other words, XR is a general term that specifically includes AR, VR, MR, and CR. In brief, XR spans multiple levels, ranging from a virtual world with limited sensor input to a fully immersive virtual world.
In order to implement data interaction, when a user needs to perform data interaction, a preset virtual space may be generated by using a virtual reality technology and related data, so as to obtain a preset virtual space generated by using the virtual reality technology.
Step S1502: and determining eye movement information corresponding to each person participating in the preset virtual space.
In order to implement data interaction operation, the eye movement information corresponding to each person participating in the preset virtual space may be determined, and a specific manner of acquiring the eye movement information corresponding to each person participating in the preset virtual space in this embodiment is similar to the implementation manner of step S201 in the above embodiment, which may specifically refer to the above statements, and is not described again here.
Step S1503: and generating interactive information corresponding to the preset virtual space based on the eye movement information.
Step S1504: and performing interactive operation corresponding to the preset virtual space based on the interactive information.
The specific implementation process and implementation effect of steps S1503 to S1504 in this embodiment are similar to the specific implementation process and implementation effect of steps S202 to S203 in the foregoing embodiment, and specific reference may be made to the above statements, which are not repeated herein.
For example, when the preset virtual space is a game space implemented using VR technology, the data interaction device may be implemented as a VR head-mounted display device, eye movement information of the user may be acquired through the eye tracking device included in the head-mounted display device, and interaction information in the game space may then be generated based on the eye movement information, where the interaction information may include: information for switching the scene of the game space, information for responding to an interface displayed in the game space, information for interacting with a virtual item in the game space, and the like. After the user's interaction information is acquired, the interactive operation corresponding to the preset virtual space can be executed based on the interaction information, so that effective interactive operation in the preset virtual space through eye movement information is realized and the practicability of the method is further improved.
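Steps S1503 and S1504 in the game-space example can be sketched as a simple event-to-command dispatch. This is a hypothetical illustration; the event names, the command mapping, and the state representation are all assumptions, not details from the embodiment.

```python
# Hypothetical sketch of S1503/S1504: classify an eye-movement event into
# interaction information, then execute it against a minimal game-space state.
EVENT_TO_COMMAND = {
    "double_blink": "switch_scene",       # switch scene information of the game space
    "dwell_on_button": "confirm_dialog",  # respond to an interface shown in the space
    "gaze_at_item": "pick_up_item",       # interact with a virtual item
}

def generate_interaction(eye_event):
    """Step S1503: map a classified eye-movement event to interaction information."""
    return EVENT_TO_COMMAND.get(eye_event, "no_op")

def execute(interaction, space_state):
    """Step S1504: apply the interaction to the virtual-space state."""
    if interaction == "switch_scene":
        space_state["scene"] += 1
    elif interaction == "pick_up_item":
        space_state["inventory"] += 1
    return space_state

state = {"scene": 0, "inventory": 0}
state = execute(generate_interaction("double_blink"), state)
state = execute(generate_interaction("gaze_at_item"), state)
```

A real system would of course derive events like `double_blink` from the raw eye movement information (movement type, dwell time, gaze angle) described earlier.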
According to the data interaction method provided by this embodiment, the preset virtual space generated using virtual reality technology is obtained, the eye movement information corresponding to each person participating in the preset virtual space is determined, interaction information corresponding to the preset virtual space is then generated based on the eye movement information, and the interactive operation corresponding to the preset virtual space is executed based on the interaction information. Interactive operation in the preset virtual space according to each person's eye movement information is thus effectively realized, and the interaction rhythm of the preset virtual space becomes more flexible and reliable, so that the quality and effect of the data interaction method are effectively ensured and the practicability of the method is further improved.
Fig. 16 is a schematic diagram of another teleconference method according to an embodiment of the present invention. Referring to fig. 16, this embodiment provides another teleconference method. The execution subject of the method may be a teleconference device, which may be implemented as software or as a combination of software and hardware. Specifically, when the teleconference device is implemented as hardware, it may be any of various electronic devices having a display screen and an eye tracking device, including but not limited to a smartphone, a tablet computer, a personal computer (PC), a conference room display device, a head-mounted display device implemented with Augmented Reality (AR), Virtual Reality (VR) or Mixed Reality (MR) technology, and so on. When the teleconference device is implemented as software, it can be installed in the electronic devices exemplified above. Based on the above teleconference device, the teleconference method in this embodiment may include the following steps:
Step S1601: and acquiring the conference permission information of each conference participating terminal in the teleconference.
A teleconference may include a plurality of different participating terminals, and different participating terminals may correspond to different conference permission information. Because the conference permission information directly affects the data displayed by a participating terminal, in order to ensure the security and reliability of data display, when a teleconference is held, all the participating terminals included in the teleconference may be acquired, and the conference permission information of each participating terminal may then be determined, where the conference permission information identifies the data display permission corresponding to the participating terminal.
Specifically, this embodiment does not limit the specific manner of obtaining the conference permission information of each participating terminal in the teleconference, and a person skilled in the art can set it according to a specific application scenario or application requirement. In some examples, a mapping relationship between user identities and conference permission information is pre-configured. When users access the teleconference through participating terminals, the user identity corresponding to each participating terminal can be obtained; specifically, before a participating terminal enters the teleconference, an identity authentication operation can be performed on the user corresponding to that participating terminal. After the user identities are obtained, the conference permission information corresponding to each participating terminal in the teleconference can be determined based on the user identities and the mapping relationship.
For example, the participating terminal corresponding to the speaker may connect to a plurality of other participating terminals through a preset network. The description here takes the case where the other participating terminals include participating terminal a, participating terminal b, and participating terminal c, where participating terminal a corresponds to user A, participating terminal b corresponds to user B, and participating terminal c corresponds to user C. Before the other participating terminals access, through the preset network, the teleconference hosted by the speaker's terminal, an identity recognition operation can be performed on each participating terminal: pop-up window information for verifying each participant's permission can be displayed in the display interface of each participating terminal, and information such as each participant's identity information and the conference access password of the teleconference can be acquired through the pop-up window.
After the identity information of each participant is obtained, the conference permission information corresponding to each participating terminal may be determined based on that identity information. For example: based on the identity information of user A, the conference permission information of participating terminal a can be determined to be conference permission 1, which is the highest permission; based on the identity information of user B, the conference permission information of participating terminal b can be determined to be conference permission 2, which is lower than conference permission 1; and based on the identity information of user C, the conference permission information of participating terminal c can be determined to be conference permission 3, which is lower than conference permission 2.
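The identity-to-permission lookup in step S1601 can be sketched as follows. This is a hypothetical minimal illustration: the mapping contents, the numeric encoding (lower number = higher permission, matching the example above), and the fallback for unknown identities are all assumptions.

```python
# Hypothetical sketch of S1601: resolve each participating terminal's
# conference permission from a pre-configured identity-to-permission mapping.
IDENTITY_TO_PERMISSION = {
    "user_A": 1,  # conference permission 1: highest
    "user_B": 2,  # conference permission 2: lower than 1
    "user_C": 3,  # conference permission 3: lowest
}

def conference_permission(identity, default=3):
    """Return the permission level for an authenticated identity; an unknown
    identity falls back to the lowest level (an assumed safe default)."""
    return IDENTITY_TO_PERMISSION.get(identity, default)

terminals = {"terminal_a": "user_A", "terminal_b": "user_B", "terminal_c": "guest"}
permissions = {t: conference_permission(u) for t, u in terminals.items()}
```

Defaulting unknown identities to the lowest permission is one way to keep the "security and confidentiality" property the embodiment emphasizes.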
Step S1602: and determining conference data which can be displayed by each conference participating terminal based on the conference permission information, wherein the conference data comprises at least one part of a display file which needs to be displayed in the remote conference.
Because different conference permission information can correspond to different data display permissions, in order for each participating terminal to display the data matching its permission, after the conference permission information of each participating terminal in the teleconference is acquired, the conference permission information can be analyzed to determine the conference data that each participating terminal can display, where the conference data includes at least part of the display file that needs to be displayed in the teleconference, and the conference data may include at least one of the following: two-dimensional display data, three-dimensional display data, and the like.
In some examples, determining the conference data that each participating terminal can display based on the conference permission information may include: acquiring the file display permission corresponding to each part of the display file to be displayed in the teleconference; determining the target file display permission corresponding to the conference permission information; and, in the display file, determining all data at or below the target file display permission as the conference data that the participating terminal can display. The conference data then includes only the data corresponding to each participating terminal's conference permission information and no other data, so the conference data that a participating terminal can display is effectively determined based on the different conference permission information, ensuring the accuracy and reliability of the determination.
In other examples, determining conference data that can be displayed by each of the participating terminals based on the conference permission information may include: in the display file, determining the permission internal data and the permission external data corresponding to each participating terminal based on the conference permission information; processing the data outside the authority to obtain processed data which cannot be checked; and acquiring conference data which can be displayed by each conference participating terminal based on the processed data and the data in the authority.
Specifically, in order to accurately obtain conference data that can be displayed by each participating terminal, a display file that needs to be displayed in a teleconference can be obtained first, and for the data in the display file, different data can correspond to different conference permission information, so after the conference permission information is obtained, intra-permission data and extra-permission data corresponding to each participating terminal can be determined based on the conference permission information, where the intra-permission data may be data that is less than or equal to the data corresponding to the conference permission information, and the extra-permission data may be data that is greater than the data corresponding to the conference permission information. It can be understood that, for the same display file, different conference permission information may correspond to different intra-permission data and different extra-permission data, and in general, the higher the priority of the conference permission information (for example, the conference permission information is conference permission 1), the more the corresponding intra-permission data and the less the corresponding extra-permission data are, and the lower the priority of the conference permission information is (for example, the conference permission information is conference permission 3), the less the corresponding intra-permission data and the more the corresponding extra-permission data are.
After the intra-permission data and the extra-permission data are obtained, the extra-permission data cannot be displayed on the display interface of the corresponding participating terminal. Therefore, so that participating terminals with different conference permission information display only the data of the corresponding permission and never any extra-permission data, the extra-permission data can be processed after it is obtained: specifically, operations such as encoding, encryption, and deletion can be performed on it to obtain processed data that cannot be viewed. After the processed data is obtained, the processed data and the intra-permission data can be spliced or combined, so that the conference data that each participating terminal can display is obtained.
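The split-mask-splice flow above can be sketched as a per-section filter. This is a hypothetical illustration under assumed conventions: sections carry a required permission level, lower level numbers are higher permissions (as in the conference permission 1/2/3 example), and masking is represented by a placeholder string standing in for encoded, encrypted, or deleted content.

```python
# Hypothetical sketch: build a terminal's viewable conference data by keeping
# intra-permission sections and replacing extra-permission sections with a
# non-viewable placeholder (the "processed data").
LOCKED = "[locked]"  # placeholder for encoded/encrypted/deleted content

def conference_data(display_file, level):
    """display_file: list of (required_level, content) sections.
    A terminal with numeric permission `level` (1 = highest) may view a
    section whose required_level number is >= its own level number."""
    return [content if required >= level else LOCKED
            for required, content in display_file]

doc = [(3, "public agenda"), (2, "department schedule"), (1, "confidential budget")]
view_a = conference_data(doc, 1)  # conference permission 1: sees everything
view_c = conference_data(doc, 3)  # conference permission 3: only the public part
```

Splicing the visible sections and placeholders back together preserves the file's layout while hiding content, which also makes it easy to attach the "lock icon" identification described later.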
For example, when a speaker needs to perform a project reporting operation, since the project includes a plurality of departments and different departments are respectively responsible for managing different parts of the project, different departments may be configured with different meeting authority information in order to avoid mutual influence of schedules and information between different departments.
When the speaker holds a teleconference with the person in charge of each department, the conference permission information of the participating terminal corresponding to each person in charge can be determined based on that person's identity information. For example: the participating terminal corresponding to the person in charge of the first department is participating terminal a, that of the second department is participating terminal b, and that of the third department is participating terminal c; through analysis, the conference permission information corresponding to participating terminal a can be determined to be conference permission 1, that of participating terminal b to be conference permission 2, and that of participating terminal c to be conference permission 3.
Because conference permission 3 is lower than conference permission 2, and conference permission 2 is lower than conference permission 1, it can then be determined based on conference permission 1 that participating terminal a can display the entire display file, based on conference permission 2 that participating terminal b can display part of the display file, and based on conference permission 3 that participating terminal c can display a smaller part of the display file. Accurately determining the conference data that each participating terminal can display based on the conference permission information is thereby effectively achieved.
Step S1603: and displaying the corresponding conference data by using the conference-participating terminal.
After the conference data that each participating terminal can display is acquired, each participating terminal can be used to display its corresponding conference data, so that users with different permissions see, through their participating terminals, the conference data matched with their permissions. The risk of conference data leakage can thus be effectively avoided, ensuring the security and reliability of the conference data display.
In some examples, when the conference data that a participating terminal needs to display is composed of intra-permission data and processed data, then in order to accurately distinguish the intra-permission data from the processed data when displaying the conference data, identification information indicating content that cannot be displayed may be added to the portion corresponding to the processed data, where the identification information may be a "lock icon", an "XX" mark, and the like, so that the user can intuitively know, through the displayed identification information, that the currently displayed file contains other data that does not match the user's permission.
It should be noted that the method in this embodiment may also include the method in the embodiment shown in fig. 1 to fig. 13, and for the part of this embodiment that is not described in detail, reference may be made to the relevant description of the embodiment shown in fig. 1 to fig. 13. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1 to fig. 13, which are not described herein again.
According to the teleconference method provided by this embodiment, the conference permission information of each participating terminal in the teleconference is obtained, the conference data that each participating terminal can display is then determined based on the conference permission information, and the corresponding conference data is displayed by each participating terminal. Each participating terminal can thus display only the conference data matched with its conference permission information, which ensures the security and confidentiality of data display, avoids the risk of data leakage, and further improves the security and reliability of the teleconference.
Fig. 17 is a schematic structural diagram of a remote conference apparatus according to an embodiment of the present invention; referring to fig. 17, the present embodiment provides a remote conference apparatus, which is configured to perform the remote conference method shown in fig. 2, and specifically, the remote conference apparatus may include:
A first obtaining module 11, configured to obtain eye movement information of participants participating in a teleconference;
a first generating module 12, configured to generate interaction information corresponding to the teleconference based on the eye movement information;
and the first processing module 13 is configured to perform an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference.
In some examples, when the first generation module 12 generates the interaction information corresponding to the teleconference based on the eye movement information, the first generation module 12 is configured to perform: determining an operation mode of the teleconference; and generating interactive information corresponding to the teleconference based on the eye movement information and the operation mode.
In some examples, when the first generation module 12 determines the operation mode of the teleconference, the first generation module 12 is configured to perform: acquiring sound configuration information corresponding to each participant participating in the teleconference; based on the sound configuration information, an operational mode of the teleconference is determined.
In some examples, when the first generation module 12 determines the operation mode of the teleconference based on the sound configuration information, the first generation module 12 is configured to perform: detecting whether a display file exists in the teleconference; and determining the running mode of the teleconference based on the detection result of the display file and the sound configuration information.
In some examples, when the first generation module 12 determines the operation mode of the teleconference based on the detection result of the display file and the sound configuration information, the first generation module 12 is configured to perform: counting the number of personnel capable of performing voice interaction operation within a preset time period based on the sound configuration information; when the detection result is that the display file exists and the number of the personnel is one, determining that the operation mode is a speech mode; when the detection result is that the display file exists and the number of the personnel is at least two, determining that the operation mode is a discussion mode; and when the detection result is that no display file exists and the number of the persons is at least two, determining that the operation mode is a conversation mode.
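The mode-determination rules above reduce to a small decision function. The following sketch is a hypothetical illustration of those rules only; the function name and the "undetermined" fallback (for the case the text does not cover) are assumptions.

```python
# Hypothetical sketch: determine the teleconference operation mode from
# whether a display file exists and how many people performed voice
# interaction within the preset time period.
def operation_mode(has_display_file, active_speakers):
    if has_display_file and active_speakers == 1:
        return "speech"       # one speaker presenting a file
    if has_display_file and active_speakers >= 2:
        return "discussion"   # several people talking over a shared file
    if not has_display_file and active_speakers >= 2:
        return "conversation" # free dialogue, no file
    return "undetermined"     # remaining cases are not covered by the text
```

The chosen mode then steers how the eye movement information is turned into interaction information (conversation prompts versus file annotations).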
In some examples, when the first generation module 12 generates the interaction information corresponding to the teleconference based on the eye movement information and the operation mode, the first generation module 12 is configured to perform: when the operation mode is a conversation mode, conversation interaction information corresponding to the participants in the teleconference is generated based on the eye movement information, and the conversation interaction information is used for determining the target participants about to have a conversation with the speaker in the teleconference;
at this time, when the first processing module 13 performs an interactive operation corresponding to the teleconference based on the interactive information, the first processing module 13 is configured to perform: determining a target participant to be conversed with a speaker in the teleconference based on the conversation interactive information; and generating conversation prompt information with the target participant so that the target participant has a conversation with the speaker based on the conversation prompt information.
In some examples, when the first generation module 12 generates the interaction information corresponding to the teleconference based on the eye movement information and the operation mode, the first generation module 12 is configured to perform: when the operation mode is a speech mode or a discussion mode, generating annotation interactive information corresponding to a display file in the teleconference based on the eye movement information;
when the first processing module 13 performs an interactive operation corresponding to the teleconference based on the interactive information, the first processing module 13 is configured to perform: and performing annotation display on the display file based on the annotation interactive information.
In some examples, when the first processing module 13 performs annotation display on the display file based on the annotation interactive information, the first processing module 13 is configured to perform: determining the conference identity of the conferee corresponding to the eye movement information; when the conference identity is the speaker, dynamically labeling and/or dynamically displaying data related to the conference progress in the display file based on the labeling interaction information; and when the conference identity is the person listening to the speaker, labeling the interested data in the display file based on the labeling interaction information.
In some examples, after labeling the data of interest in the display file based on the labeling interaction information, the first obtaining module 11 and the first processing module 13 in this embodiment are configured to perform the following steps:
The first obtaining module 11 is configured to count the labeling interaction information of all listeners and obtain labeling statistical information;
the first processing module 13 is configured to determine a conference host state of the speaker based on the annotation statistical information; and when the conference host state does not meet the expected state, generating adjustment prompt information corresponding to the speaker.
In some examples, when the first processing module 13 determines the conference hosting state of the speaker based on the labeling statistical information, the first processing module 13 is configured to perform: when the labeling statistical information is less than a preset number, determining that the conference host state corresponding to the speaker does not meet the expected state; and when the labeling statistical information is greater than or equal to the preset number, determining that the conference host state corresponding to the speaker meets the expected state.
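A minimal sketch of this threshold comparison, assuming the labeling statistics reduce to an annotation count (the function name and prompt text are assumptions, not taken from the disclosure):

```python
def check_hosting_state(annotation_count, preset_number):
    """Compare the labeling statistics against the preset number; when the
    expected state is not met, also return adjustment prompt information."""
    if annotation_count < preset_number:
        # Hypothetical prompt text for the speaker.
        return False, "Listener annotations are sparse; consider adjusting the presentation."
    return True, None
```

With a preset number of 5, two listener annotations would trigger the adjustment prompt, while five or more would not.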
In some examples, after the labeling statistical information is greater than or equal to the preset number, the first processing module 13 in this embodiment is configured to perform: determining the attention hotspot information corresponding to all listeners based on the labeling statistical information; generating interaction prompt information based on the attention hotspot information, wherein the interaction prompt information is used for prompting a user whether to generate an exchange group corresponding to the attention hotspot information based on the labeling statistical information; and, in response to confirmation information for the interaction prompt information, generating an exchange group corresponding to the attention hotspot information.
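One way to sketch the hotspot determination and exchange-group creation described above (all names, the per-listener data layout, and the `min_listeners` threshold are assumptions for illustration):

```python
from collections import Counter

def attention_hotspots(listener_annotations, min_listeners=2):
    """listener_annotations: {listener_id: [annotated regions]}.
    A region counts as an attention hotspot when at least
    `min_listeners` distinct listeners annotated it."""
    counts = Counter(region
                     for regions in listener_annotations.values()
                     for region in set(regions))  # de-duplicate per listener
    return [region for region, n in counts.items() if n >= min_listeners]

def maybe_create_groups(hotspots, confirmed):
    """On confirmation of the interaction prompt, generate one exchange
    group per attention hotspot (hypothetical group representation)."""
    if hotspots and confirmed:
        return [{"topic": h, "type": "exchange_group"} for h in hotspots]
    return []
```

For example, a figure annotated by two listeners becomes a hotspot, and confirming the prompt yields an exchange group for that figure.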
In some examples, when the first generation module 12 generates the interaction information corresponding to the teleconference based on the eye movement information and the operation mode, the first generation module 12 is configured to perform: when the operation mode is a first mode, acquiring voice information; generating mode switching information corresponding to the teleconference based on the voice information and the eye movement information; and switching the teleconference from the first mode to a second mode based on the mode switching information.
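The fusion of voice information and eye movement information into mode switching information could look like the following sketch; the trigger keyword, gaze-target value, and mode names are purely illustrative assumptions, since the disclosure does not fix them:

```python
def generate_mode_switch(current_mode, voice_text, gaze_target):
    """Combine a voice cue and the current gaze target into mode
    switching information, or None when no switch is indicated."""
    # Assumed heuristic: a "discuss" cue while gazing at the shared
    # document suggests switching into a discussion mode.
    if "discuss" in voice_text.lower() and gaze_target == "shared_document":
        return {"from": current_mode, "to": "discussion"}
    return None
```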
The apparatus shown in fig. 17 can perform the method of the embodiments shown in fig. 1-13; for parts of this embodiment that are not described in detail, reference may be made to the related description of the embodiments shown in fig. 1-13. For the implementation process and technical effect of this technical solution, refer to the descriptions in the embodiments shown in fig. 1-13, which are not repeated here.
In one possible design, the structure of the teleconference device shown in fig. 17 may be implemented as an electronic device, which may be a smartphone, a tablet computer, a personal computer (PC), a conference room display device, or various other devices. As shown in fig. 18, the electronic device may include: a first processor 21 and a first memory 22. The first memory 22 is used for storing a program for the corresponding electronic device to execute the teleconferencing method in the embodiments shown in fig. 1-13 described above, and the first processor 21 is configured to execute the program stored in the first memory 22.
The program comprises one or more computer instructions which, when executed by the first processor 21, are capable of performing the steps of: acquiring eye movement information of participants participating in a teleconference; generating interactive information corresponding to the teleconference based on the eye movement information; and performing an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference.
Further, the first processor 21 is also used to execute all or part of the steps in the embodiments shown in fig. 1-13.
The electronic device may further include a first communication interface 23 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the teleconferencing method in the method embodiments shown in fig. 1 to 13.
Furthermore, an embodiment of the present invention provides a computer program product, including: computer program, which, when executed by a processor of an electronic device, causes the processor to carry out the steps of the teleconferencing method as described above in connection with fig. 1-13.
Fig. 19 is a schematic structural diagram of a data interaction apparatus according to an embodiment of the present invention; referring to fig. 19, the present embodiment provides a data interaction apparatus, where the data interaction apparatus is configured to execute the data interaction method shown in fig. 14, and specifically, the data interaction apparatus may include:
a second obtaining module 31, configured to obtain to-be-processed data displayed in a display interface;
a second determining module 32, configured to determine eye movement information for a plurality of persons to view the data to be processed respectively;
a second generating module 33, configured to generate interaction information corresponding to the data to be processed based on the eye movement information;
and the second processing module 34 is configured to perform label display on the data to be processed based on the interaction information.
In some examples, when the second generating module 33 generates the interaction information corresponding to the data to be processed based on the eye movement information, the second generating module 33 is configured to perform: determining, in the data to be processed, the data of interest corresponding to each person based on the eye movement information; and generating label interaction information corresponding to the data of interest of each person, wherein the label interaction information is used for performing label display on the data of interest.
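Determining each person's data of interest from eye movement information might, for instance, accumulate gaze dwell time per region, as in this sketch (the sample format and the dwell threshold are assumptions, not specified by the disclosure):

```python
def data_of_interest(gaze_samples, dwell_threshold=1.0):
    """gaze_samples: {person_id: [(region, dwell_seconds), ...]}.
    A region counts as data of interest for a person when that person's
    cumulative dwell time on it reaches the (assumed) threshold."""
    result = {}
    for person, samples in gaze_samples.items():
        dwell = {}
        for region, seconds in samples:
            dwell[region] = dwell.get(region, 0.0) + seconds
        result[person] = [r for r, t in dwell.items() if t >= dwell_threshold]
    return result
```

Two short fixations on the same paragraph can thus add up to a labeled region, while a brief glance elsewhere does not.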
The apparatus shown in fig. 19 can perform the method of the embodiment shown in fig. 14, and reference may be made to the related description of the embodiment shown in fig. 14 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 14, and are not described herein again.
In one possible design, the structure of the data interaction apparatus shown in fig. 19 may be implemented as an electronic device, which may be a smart phone, a tablet computer, a personal computer (PC), a conference room display device, a head-mounted display device implemented based on XR technology, or other devices. As shown in fig. 20, the electronic device may include: a second processor 41 and a second memory 42. The second memory 42 is used for storing a program for the corresponding electronic device to execute the data interaction method in the embodiment shown in fig. 14, and the second processor 41 is configured to execute the program stored in the second memory 42.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the second processor 41, are capable of performing the steps of: acquiring to-be-processed data displayed in a display interface; determining eye movement information which is respectively checked by a plurality of persons on the data to be processed; generating interactive information corresponding to the data to be processed based on the eye movement information; and performing label display on the data to be processed based on the interactive information.
Further, the second processor 41 is also used to execute all or part of the steps in the embodiment shown in fig. 14.
The electronic device may further include a second communication interface 43 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the data interaction method in the method embodiment shown in fig. 14.
Furthermore, an embodiment of the present invention provides a computer program product, including: the computer program, when executed by a processor of the electronic device, causes the processor to perform the steps of the data interaction method shown in fig. 14.
FIG. 21 is a schematic structural diagram of another data interaction device according to an embodiment of the present invention; referring to fig. 21, the present embodiment provides another data interaction apparatus, where the data interaction apparatus is configured to perform the data interaction method shown in fig. 15, and specifically, the data interaction apparatus may include:
a third obtaining module 51, configured to obtain a preset virtual space generated by a virtual reality technology;
A third determining module 52, configured to determine eye movement information corresponding to each person participating in the preset virtual space;
a third generating module 53, configured to generate interaction information corresponding to the preset virtual space based on the eye movement information;
and a third processing module 54, configured to perform an interactive operation corresponding to the preset virtual space based on the interaction information.
The apparatus shown in fig. 21 can execute the method of the embodiment shown in fig. 15, and reference may be made to the related description of the embodiment shown in fig. 15 for a part of this embodiment that is not described in detail. The implementation process and technical effect of this technical solution are described in the embodiment shown in fig. 15, and are not described herein again.
In one possible design, the structure of the data interaction apparatus shown in fig. 21 may be implemented as an electronic device, which may be a wearable device such as a head-mounted display device implemented based on XR technology. As shown in fig. 22, the electronic device may include: a third processor 61 and a third memory 62. The third memory 62 is used for storing a program for the corresponding electronic device to execute the data interaction method in the embodiment shown in fig. 15, and the third processor 61 is configured to execute the program stored in the third memory 62.
The program comprises one or more computer instructions which, when executed by the third processor 61, are capable of performing the steps of: acquiring a preset virtual space generated by a virtual reality technology; determining eye movement information corresponding to each person participating in a preset virtual space; generating interactive information corresponding to a preset virtual space based on the eye movement information; and performing interactive operation corresponding to the preset virtual space based on the interactive information.
Further, the third processor 61 is also used for executing all or part of the steps in the embodiment shown in fig. 15.
The electronic device may further include a third communication interface 63 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the data interaction method in the method embodiment shown in fig. 15.
Furthermore, an embodiment of the present invention provides a computer program product, including: the computer program, when executed by a processor of the electronic device, causes the processor to perform the steps of the data interaction method shown in fig. 15.
Fig. 23 is a schematic structural diagram of another remote conference apparatus according to an embodiment of the present invention; referring to fig. 23, the present embodiment provides still another remote conference apparatus, which is configured to execute the remote conference method shown in fig. 16, and specifically, the remote conference apparatus may include:
a fourth obtaining module 71, configured to obtain conference permission information of each participating terminal in the teleconference;
a fourth determining module 72, configured to determine, based on the conference permission information, conference data that can be displayed by each participating terminal, where the conference data includes at least a part of a display file that needs to be displayed in the teleconference;
and a fourth processing module 73, configured to display the corresponding conference data by using the conference-participating terminal.
In some examples, when the fourth determination module 72 determines the conference data that can be displayed by each participating terminal based on the conference permission information, the fourth determination module 72 may be configured to perform: determining, in the display file, the in-permission data and the out-of-permission data corresponding to each participating terminal based on the conference permission information; processing the out-of-permission data to obtain processed data that cannot be viewed; and obtaining the conference data that can be displayed by each participating terminal based on the processed data and the in-permission data.
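The permission-based split of the display file into viewable and non-viewable parts could be sketched as follows; the section-keyed file layout and the redaction placeholder are illustrative assumptions (the disclosure only requires that out-of-permission data be processed so it cannot be viewed):

```python
def build_conference_view(display_file, permitted_sections):
    """display_file: {section_id: content}. Sections outside the
    terminal's permission are replaced with a non-viewable placeholder;
    in-permission sections are passed through unchanged."""
    view = {}
    for section, content in display_file.items():
        if section in permitted_sections:
            view[section] = content          # in-permission data, shown as-is
        else:
            view[section] = "[REDACTED]"     # processed data that cannot be viewed
    return view
```

Each participating terminal then renders its own view, so a terminal without permission for the budget section sees only the placeholder there.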
The apparatus shown in fig. 23 can execute the method of the embodiment shown in fig. 16, and reference may be made to the related description of the embodiment shown in fig. 16 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution are described in the embodiment shown in fig. 16, and are not described herein again.
In one possible design, the structure of the teleconference device shown in fig. 23 may be implemented as an electronic device, which may be a smartphone, a tablet computer, a personal computer (PC), a conference room display device, or various other devices. As shown in fig. 24, the electronic device may include: a fourth processor 81 and a fourth memory 82. The fourth memory 82 is used for storing a program for the corresponding electronic device to execute the teleconferencing method in the embodiment shown in fig. 16 described above, and the fourth processor 81 is configured to execute the program stored in the fourth memory 82.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the fourth processor 81, enable the following steps to be performed: acquiring conference permission information of each conference participating terminal in a teleconference; determining conference data which can be displayed by each conference participating terminal based on the conference permission information, wherein the conference data comprises at least one part of a display file which needs to be displayed in the remote conference; and displaying the corresponding conference data by using the conference-participating terminal.
Further, the fourth processor 81 is also configured to perform all or part of the steps in the embodiment shown in fig. 16.
The electronic device may further include a fourth communication interface 83 for communicating with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the teleconference method in the method embodiment shown in fig. 16.
Furthermore, an embodiment of the present invention provides a computer program product, including: the computer program, when executed by a processor of an electronic device, causes the processor to perform the steps of the teleconferencing method described above in fig. 16.
The above-described embodiments of the apparatus are merely illustrative, and units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by a combination of hardware and software. Based on this understanding, the part of the above technical solutions that in essence contributes to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (18)

1. A teleconferencing method, comprising:
acquiring eye movement information of participants participating in a teleconference;
generating interaction information corresponding to the teleconference based on the eye movement information;
and performing an interactive operation corresponding to the teleconference based on the interactive information to assist the teleconference.
2. The method of claim 1, wherein generating interaction information corresponding to the teleconference based on the eye movement information comprises:
determining an operation mode of the teleconference;
and generating interactive information corresponding to the remote conference based on the eye movement information and the running mode.
3. The method of claim 2, wherein determining the operational mode of the teleconference comprises:
acquiring sound configuration information corresponding to each participant participating in the teleconference;
determining an operating mode of the teleconference based on the sound configuration information.
4. The method of claim 3, wherein determining the operational mode of the teleconference based on the sound configuration information comprises:
detecting whether a display file exists in the remote conference or not;
And determining the running mode of the teleconference based on the detection result of the display file and the sound configuration information.
5. The method of claim 4, wherein determining the operation mode of the teleconference based on the detection result of the display file and the sound configuration information comprises:
counting the number of persons capable of performing voice interaction operation within a preset time period based on the sound configuration information;
when the detection result is that a display file exists and the number of persons is one, determining that the operation mode is a speech mode;
when the detection result is that a display file exists and the number of persons is at least two, determining that the operation mode is a discussion mode;
and when the detection result is that no display file exists and the number of persons is at least two, determining that the operation mode is a conversation mode.
6. The method of claim 2, wherein generating interaction information corresponding to the teleconference based on the eye movement information and the operating mode comprises:
when the operation mode is a conversation mode, generating conversation interaction information corresponding to the participants in the teleconference based on the eye movement information, wherein the conversation interaction information is used for determining target participants about to have a conversation with a speaker in the teleconference;
Performing an interactive operation corresponding to the teleconference based on the interaction information, including:
determining a target participant to be conversed with a speaker in the teleconference based on the conversation interaction information;
and generating conversation prompt information with the target participant so that the target participant has a conversation with a speaker based on the conversation prompt information.
7. The method of claim 2, wherein generating interaction information corresponding to the teleconference based on the eye movement information and the operating mode comprises:
when the operation mode is a speech mode or a discussion mode, generating annotation interactive information corresponding to a display file in the remote conference based on the eye movement information;
performing an interactive operation corresponding to the teleconference based on the interaction information, including:
and performing label display on the display file based on the label interaction information.
8. The method of claim 7, wherein the displaying the display file based on the annotation interaction information comprises:
determining the conference identity of the conference participants corresponding to the eye movement information;
When the conference identity is a speaker, dynamically labeling and/or dynamically displaying data related to the conference progress in the display file based on the labeling interaction information;
and when the conference identity is a listener, labeling the data of interest in the display file based on the labeling interaction information.
9. The method of claim 8, wherein after annotating the data of interest in the display file based on the annotation interaction information, the method further comprises:
counting the labeling interactive information of all listeners to obtain labeling statistical information;
determining a conference hosting state of the speaker based on the labeling statistical information;
and when the conference host state does not meet the expected state, generating adjustment prompt information corresponding to the speaker.
10. The method of claim 9, wherein determining the conference hosting state of the speaker based on the labeling statistical information comprises:
when the labeling statistical information is less than a preset number, determining that the conference host state corresponding to the speaker does not meet the expected state;
and when the labeling statistical information is greater than or equal to the preset number, determining that the conference host state corresponding to the speaker meets the expected state.
11. The method of claim 10, wherein after the labeling statistical information is greater than or equal to the preset number, the method further comprises:
determining attention hotspot information corresponding to all listeners based on the labeling statistical information;
generating interaction prompt information based on the attention hotspot information, wherein the interaction prompt information is used for prompting a user whether to generate an exchange group corresponding to the attention hotspot information based on the labeling statistical information;
and, in response to confirmation information for the interaction prompt information, generating an exchange group corresponding to the attention hotspot information.
12. The method of any of claims 2-11, wherein generating interaction information corresponding to the teleconference based on the eye movement information and the operating mode comprises:
when the operation mode is a first mode, acquiring voice information;
generating mode switching information corresponding to the teleconference based on the voice information and the eye movement information;
Switching the teleconference from a first mode to a second mode based on the mode switching information.
13. A data interaction method, comprising:
acquiring data to be processed displayed in a display interface;
determining eye movement information of a plurality of persons for respectively checking the data to be processed;
generating interactive information corresponding to the data to be processed based on the eye movement information;
and performing label display on the data to be processed based on the interaction information.
14. The method of claim 13, wherein generating interaction information corresponding to the data to be processed based on the eye movement information comprises:
determining, in the data to be processed, the data of interest corresponding to each person based on the eye movement information;
and generating label interaction information corresponding to the data of interest of each person, wherein the label interaction information is used for performing label display on the data of interest.
15. A method for data interaction, comprising:
acquiring a preset virtual space generated by a virtual reality technology;
determining eye movement information corresponding to each person participating in a preset virtual space;
Generating interactive information corresponding to the preset virtual space based on the eye movement information;
and executing interactive operation corresponding to the preset virtual space based on the interactive information.
16. A teleconferencing method, comprising:
acquiring conference permission information of each participating terminal in a teleconference;
determining conference data which can be displayed by each conference participating terminal based on the conference permission information, wherein the conference data comprises at least one part of a display file which needs to be displayed in a remote conference;
and displaying the corresponding conference data by using the conference-participating terminal.
17. The method of claim 16, wherein determining the conference data that can be displayed by each of the participating terminals based on the conference permission information comprises:
determining, in the display file, the in-permission data and the out-of-permission data corresponding to each participating terminal based on the conference permission information;
processing the out-of-permission data to obtain processed data that cannot be viewed;
and obtaining the conference data that can be displayed by each participating terminal based on the processed data and the in-permission data.
18. An electronic device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the teleconferencing method of any one of claims 1-10.
CN202210238594.4A 2022-03-11 2022-03-11 Teleconference method, data interaction method, device, and computer storage medium Pending CN114679437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210238594.4A CN114679437A (en) 2022-03-11 2022-03-11 Teleconference method, data interaction method, device, and computer storage medium

Publications (1)

Publication Number Publication Date
CN114679437A true CN114679437A (en) 2022-06-28

Family

ID=82072051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210238594.4A Pending CN114679437A (en) 2022-03-11 2022-03-11 Teleconference method, data interaction method, device, and computer storage medium

Country Status (1)

Country Link
CN (1) CN114679437A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826804A (en) * 2022-06-30 2022-07-29 天津大学 Method and system for monitoring teleconference quality based on machine learning
CN115334053A (en) * 2022-08-03 2022-11-11 深圳乐播科技有限公司 Method for realizing associated screen projection in cloud conference and related product
CN115396404A (en) * 2022-08-08 2022-11-25 深圳乐播科技有限公司 Synchronous screen projection method and related device for speaker explanation position in cloud conference scene

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102209080A (en) * 2010-03-30 2011-10-05 刘盛举 Terminal system for synchronous teaching or conferences and control method thereof
US20120224021A1 (en) * 2011-03-02 2012-09-06 Lee Begeja System and method for notification of events of interest during a video conference
US20140184550A1 (en) * 2011-09-07 2014-07-03 Tandemlaunch Technologies Inc. System and Method for Using Eye Gaze Information to Enhance Interactions
WO2018136063A1 (en) * 2017-01-19 2018-07-26 Hewlett-Packard Development Company, L.P. Eye gaze angle feedback in a remote meeting
CN110268370A (en) * 2017-01-19 2019-09-20 惠普发展公司,有限责任合伙企业 Eye gaze angle feedback in teleconference
US20190147367A1 (en) * 2017-11-13 2019-05-16 International Business Machines Corporation Detecting interaction during meetings
CN109934150A (en) * 2019-03-07 2019-06-25 百度在线网络技术(北京)有限公司 Meeting participation recognition method, device, server and storage medium
CN112153323A (en) * 2020-09-27 2020-12-29 北京百度网讯科技有限公司 Simultaneous interpretation method and device for teleconference, electronic equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114826804A (en) * 2022-06-30 2022-07-29 天津大学 Method and system for monitoring teleconference quality based on machine learning
CN114826804B (en) * 2022-06-30 2022-09-16 天津大学 Method and system for monitoring teleconference quality based on machine learning
CN115334053A (en) * 2022-08-03 2022-11-11 深圳乐播科技有限公司 Method for realizing associated screen projection in cloud conference and related product
CN115334053B (en) * 2022-08-03 2023-07-18 深圳乐播科技有限公司 Method for realizing associated screen projection in cloud conference and related products
CN115396404A (en) * 2022-08-08 2022-11-25 深圳乐播科技有限公司 Synchronous screen projection method and related device for speaker explanation position in cloud conference scene
CN115396404B (en) * 2022-08-08 2023-09-05 深圳乐播科技有限公司 Synchronous screen projection method and related device for speaker explanation position in cloud conference scene

Similar Documents

Publication Publication Date Title
US11688399B2 (en) Computerized intelligent assistant for conferences
CN114679437A (en) Teleconference method, data interaction method, device, and computer storage medium
US10630738B1 (en) Method and system for sharing annotated conferencing content among conference participants
US12001587B2 (en) Data compliance management in recording calls
US9621731B2 (en) Controlling conference calls
US10630734B2 (en) Multiplexed, multimodal conferencing
US9298342B2 (en) Classes of meeting participant interaction
CN108205627A (en) Conditional provision of access to an interactive assistant module
CN113196239A (en) Intelligent management of content related to objects displayed within a communication session
CN113170076A (en) Dynamic curation of sequence events for a communication session
US10699709B2 (en) Conference call analysis and automated information exchange
US20200351265A1 (en) Secure dashboard user interface for multi-endpoint meeting
CN111556279A (en) Monitoring method and communication method of instant session
KR20140078258A (en) Apparatus and method for controlling mobile device by conversation recognition, and apparatus for providing information by conversation recognition during a meeting
JP2020136921A (en) Video call system and computer program
KR102412823B1 (en) System for online meeting with translation
US20220392175A1 (en) Virtual, augmented and extended reality system
US11792468B1 (en) Sign language interpreter view within a communication session
US11558440B1 (en) Simulate live video presentation in a recorded video
Rong et al. “It Feels Like Being Locked in A Cage”: Understanding Blind or Low Vision Streamers’ Perceptions of Content Curation Algorithms
US10560479B2 (en) Communication with component-based privacy
Suduc et al. Status, challenges and trends in videoconferencing platforms
US20150381937A1 (en) Framework for automating multimedia narrative presentations
CN111796900A (en) Display control method and device of electronic conference system and electronic conference system
CN110276681A (en) Method and device for conducting business

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40074564
Country of ref document: HK