WO2022215361A1 - Information processing device and information processing method
- Publication number: WO2022215361A1
- Application number: PCT/JP2022/006165 (JP2022006165W)
- Authority: WIPO (PCT)
- Prior art keywords: group, participant, user, information, information processing
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1818—Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1831—Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
Definitions
- the present disclosure relates to an information processing device and an information processing method.
- in recent years, remote meetings (also called "remote conferences"), such as web conferences in which multiple users communicate by exchanging voice and images over the Internet, have been held.
- techniques are known for dividing the users participating in such a meeting (also referred to as "participants") into a plurality of groups (see, for example, Non-Patent Document 1).
- this disclosure proposes an information processing device and an information processing method capable of notifying participants participating in a remote meeting of appropriate information.
- to solve the above problem, an information processing apparatus according to the present disclosure includes: a division unit that divides a plurality of participants participating in a remote meeting into a plurality of groups and enables conversation within each group; an estimating unit that estimates the topic of conversation in each group from the utterances of the participants; and a notification unit that notifies one participant of a second group, which is a group other than the first group to which the one participant belongs, when the topic of the second group and the preferences of the one participant satisfy a condition regarding similarity.
- FIG. 1 is a diagram showing an example of information processing according to an embodiment of the present disclosure.
- FIG. 2 is a diagram showing an example of a display for bystanders.
- FIG. 3 is a diagram showing another example of a display for bystanders.
- FIG. 4 is a diagram showing an example of a display for parties.
- FIG. 5 is a diagram illustrating a configuration example of a remote meeting system according to an embodiment of the present disclosure.
- FIG. 6 is a diagram illustrating a configuration example of a remote meeting server according to an embodiment of the present disclosure.
- FIG. 7 is a diagram illustrating an example of a group information storage unit according to an embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating a configuration example of a terminal device according to an embodiment of the present disclosure.
- FIG. 9 is a flow chart showing a processing procedure of the information processing device according to the embodiment of the present disclosure.
- FIG. 10 is a flow chart showing a processing procedure in the terminal device according to the embodiment of the present disclosure.
- FIG. 11 is a flow chart showing a processing procedure of the remote meeting server according to the embodiment of the present disclosure.
- FIG. 12 is a flow chart showing a matching processing procedure.
- FIG. 13 is a diagram showing an example of a display mode.
- FIG. 14 is a hardware configuration diagram showing an example of a computer that implements the functions of the information processing apparatus.
- 1. Embodiment
 - 1-1. Outline of information processing according to the embodiment of the present disclosure
  - 1-1-1. Display example with notification of a recommended group
  - 1-1-2. Display example without notification of a recommended group
  - 1-1-3. Background and effects
  - 1-1-4. Participation state of the user
  - 1-1-5. Groups to present
  - 1-1-6. Notification form
  - 1-1-7. Group movement, etc.
 - 1-2. Configuration of the remote meeting system according to the embodiment
 - 1-3. Configuration of the information processing apparatus according to the embodiment
 - 1-4. Configuration of the terminal device according to the embodiment
 - 1-5. Information processing procedure according to the embodiment
  - 1-5-1. Procedure of processing related to the information processing apparatus
  - 1-5-2. Procedure of processing related to the terminal device
  - 1-5-3. Procedure of processing related to the remote meeting server
  - 1-5-4. Procedure of matching processing
 - 1-6. Output mode example (party venue)
- 2. Other embodiments
 - 2-1. Other configuration examples
 - 2-2. Others
- 3. Effects of the present disclosure
- 4. Hardware configuration
- FIG. 1 is a diagram illustrating an example of information processing according to an embodiment of the present disclosure.
- Information processing according to the embodiment of the present disclosure is implemented by the remote meeting system 1 including the remote meeting server 100 and a plurality of terminal devices 10.
- in FIG. 1, the terminal devices 10 are described as a terminal device 10a, a terminal device 10b, and a terminal device 10c when they need to be distinguished; when no particular distinction is needed, they are collectively referred to as the "terminal device 10".
- although three terminal devices 10 are shown in FIG. 1, there may be four or more terminal devices 10.
- the remote meeting system 1 has a function of dividing users (participants) participating in a remote meeting (hereinafter also referred to as "meeting") into a plurality of groups.
- FIG. 1 shows a case where multiple users participating in a meeting are divided into multiple groups such as groups GP1 to GP4.
- a meeting host (administrator) divides a plurality of users participating in the meeting into a plurality of groups. Note that the above is merely an example, and in the remote meeting system 1, multiple users participating in a meeting may be automatically divided into multiple groups. Also, after division into groups, each user may be able to change the group to which he/she belongs.
- the group to which each user belongs is defined as the user's "first group”
- the groups other than the group to which each user belongs are defined as the user's "second group”.
- for a user belonging to the group GP1, the group GP1 is the first group, and the other groups GP2, GP3, GP4, etc. are the second groups.
- for a user belonging to the group GP2, the group GP2 is the first group, and the other groups GP1, GP3, GP4, etc. are the second groups.
- the first group and the second group are relative concepts, and the first group for a certain user is the second group for users other than those belonging to the same group as the user.
- group GP1 is the first group for users U1, U2, U3, and U4 belonging to group GP1, and is the second group for users other than users belonging to group GP1.
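As an illustrative, non-limiting sketch of the first-group/second-group relation described above (the partition below is a hypothetical example; the GP3 membership is invented for illustration):

```python
# Hypothetical partition of meeting participants into groups.
groups = {
    "GP1": ["U1", "U2", "U3", "U4"],
    "GP2": ["U76", "U11", "U23"],
    "GP3": ["U98", "U12"],
}

def first_group(user, groups):
    """Return the group to which the user belongs (the user's first group)."""
    for name, members in groups.items():
        if user in members:
            return name
    return None

def second_groups(user, groups):
    """Return all groups other than the user's own (the user's second groups)."""
    own = first_group(user, groups)
    return [name for name in groups if name != own]
```

For the user U1, `first_group` yields GP1 and `second_groups` yields the remaining groups, matching the relative definition above.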
- each user can also move between groups.
- the "party" corresponds to, for example, a first state of actively participating in a group conversation.
- the "bystander" corresponds to a state other than the first state, that is, a second state of not being a party.
- the participation states can be set arbitrarily.
- the step numbers such as S11-1 shown in FIG. 1 are codes for explaining each process and do not indicate the order of the processes; each process is executed as needed according to the progress of the remote meeting.
- in FIG. 1, the terminal device 10 used by the user U1 is referred to as the terminal device 10a, the terminal device 10 used by the user U12 as the terminal device 10b, and the terminal device 10 used by the user U23 as the terminal device 10c.
- An application for participating in a remote meeting provided by the remote meeting system 1 (hereinafter also referred to as a "remote meeting application") is installed in each terminal device 10, and each user participates in the remote meeting using the remote meeting application.
- each terminal device 10 estimates the preferences of the user who uses that terminal device 10.
- each terminal device 10 estimates the user's preferences based on the user's utterance history, interest items registered in advance by the user, and the like; the details will be described later.
- the terminal device 10a used by the user U1 estimates that the preference of the user U1 is the preference PF1 (step S10-1).
- in FIG. 1, abstract information such as "PF1" is shown as the information indicating the user's preference, but this information may be a keyword indicating the user's preference (interest), such as "animation" or "Olympics".
- the keyword is not limited to a character string, and may be information expressed abstractly, such as a vector representation (for example, a numerical vector).
- steps S10-1 to S10-3 will be collectively referred to as step S10 when described without distinction. Steps S10-1 to S10-3 are not limited to being executed once, but are executed as needed according to the progress of the remote meeting.
- each terminal device 10 transmits and receives information about the remote meeting to and from the remote meeting server 100 at any time.
- each terminal device 10 transmits information such as the utterances and images of the user using the terminal device 10 to the remote meeting server 100, and receives information to be output about the remote meeting from the remote meeting server 100.
- communication (transmission/reception) of information relating to the remote meeting that is input/output to/from the terminal device 10a is performed (step S11-1).
- the terminal device 10a transmits information such as the user U1's speech and images to the remote meeting server 100, and receives information to be output about the remote meeting from the remote meeting server 100.
- when the terminal device 10a estimates the preference of the user U1, the terminal device 10a transmits information indicating the preference PF1 of the user U1 to the remote meeting server 100.
- communication (transmission/reception) of information relating to the remote meeting that is input/output to/from the terminal device 10b is performed (step S11-2).
- the terminal device 10b transmits information such as user U12's speech and images to the remote meeting server 100, and receives information to be output about the remote meeting from the remote meeting server 100.
- the terminal device 10b estimates the preference of the user U12
- the terminal device 10b transmits information indicating the preference PF12 of the user U12 to the remote meeting server 100.
- communication (transmission/reception) of information relating to the remote meeting that is input/output to/from the terminal device 10c is performed (step S11-3).
- the terminal device 10c transmits information such as user U23's speech and images to the remote meeting server 100, and receives information to be output about the remote meeting from the remote meeting server 100.
- the terminal device 10c estimates the preference of the user U23
- the terminal device 10c transmits information indicating the preference PF23 of the user U23 to the remote meeting server 100.
- steps S11-1 to S11-3 will be collectively referred to as step S11 when described without distinction.
- Step S11 is a step of communicating (transmitting and receiving) information on the remote meeting between the remote meeting server 100 and each terminal device 10.
- steps S11-1 to S11-3 are not limited to being executed once, but are executed as needed according to the progress of the remote meeting. Moreover, step S11 may be performed before step S10.
- the remote meeting server 100 collects the information received from each terminal device 10 and executes an estimation process for estimating the topic of each group and the participation status of each user (step S12).
- the remote meeting server 100 collects information such as group information LT1 to LT4 and performs estimation processing.
- for the group GP1, the remote meeting server 100 estimates the topic of the group GP1 based on the content of the utterances of the users U1, U2, U3, and U4 who belong to the group GP1.
- Remote meeting server 100 presumes that the topic of group GP1 is topic GT1 based on the content of the utterances of users U1, U2, U3, and U4 belonging to group GP1.
- the remote meeting server 100 estimates the participation status of users U1, U2, U3, and U4 belonging to group GP1.
- the remote meeting server 100 estimates whether the participation status of each of the users U1, U2, U3, and U4 is a party or a bystander.
- the remote meeting server 100 estimates whether the participation state of the user U1 is a participant or a bystander based on the user U1's utterance and participation attitude in the group GP1.
- the remote meeting server 100 presumes that the participation status of the user U1 is bystander.
- similarly, the remote meeting server 100 presumes that the participation status of the user U2 is party, that of the user U3 is bystander, and that of the user U4 is party. The details of the participation state estimation will be described later.
- the remote meeting server 100 presumes that the topic of the group GP2 is the topic GT2 based on the content of the utterances of the users U76, U11, and U23 belonging to the group GP2. Further, the remote meeting server 100 presumes that the participation status of the user U76 is bystander, that of the user U11 is party, and that of the user U23 is bystander. Similarly, the remote meeting server 100 estimates the topic of the group and the participation status of the users belonging to the group for the groups GP3, GP4, and so on.
- the remote meeting server 100 estimates the topic of each group GP1 to GP4 and the participation status of each user, as shown in the group information LT1 to LT4.
- the remote meeting server 100 determines a second group to recommend to each user (hereinafter also referred to as a "recommended group") based on the similarity between the user's preferences and the topic of each group (second group) other than the first group to which the user belongs (step S13).
- the remote meeting server 100 performs matching processing on users whose participation status is a bystander.
- the remote meeting server 100 calculates the degree of similarity between a keyword indicating a user's preference (also referred to as a "participant keyword”) and a keyword indicating a group topic (also referred to as a "group keyword”).
- the remote meeting server 100 vectorizes each keyword and calculates the cosine similarity between vectors of each keyword as the similarity between keywords.
- the remote meeting server 100 converts keywords into vectors using arbitrary models (vector conversion models) such as Word2Vec and BoW (Bag of words).
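As a minimal sketch of the similarity computation described above (the 3-dimensional vectors below are hypothetical stand-ins for Word2Vec/BoW embeddings of one participant keyword and two group keywords):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two keyword vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of the participant keyword of one user and
# the group keywords of two second groups.
participant_vec = np.array([1.0, 0.2, 0.0])
group_vecs = {
    "GP2": np.array([0.1, 1.0, 0.3]),
    "GP3": np.array([0.9, 0.3, 0.1]),
}
sims = {g: cosine_similarity(participant_vec, v) for g, v in group_vecs.items()}
```

With these illustrative vectors, the similarity to GP3 is higher than to GP2, so GP3 would be favored in the matching step.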
- the remote meeting server 100 calculates the degree of similarity between the preference PF1 of the bystander user U1 and the topics GT2, GT3, and GT4 of the groups GP2, GP3, and GP4 other than the group GP1 to which the user U1 belongs. For example, the remote meeting server 100 calculates the cosine similarity between the vector of the participant keyword indicated by the preference PF1 of the user U1 and the vector of the group keyword indicated by the topic GT2 of the group GP2 as the similarity between the user U1 and the group GP2. Similarly, for the groups GP3 and GP4, the remote meeting server 100 calculates the cosine similarity between the vector of the participant keyword of the user U1 and the vector of each group keyword as the similarity to the user U1.
- the remote meeting server 100 determines the group with the highest calculated similarity as the recommended group for the user U1. For example, the remote meeting server 100 determines the group GP3 of the topic GT3 having the highest degree of similarity with the preference PF1 of the user U1 as the recommended group for the user U1.
- the remote meeting server 100 also calculates the degree of similarity between the preference PF23 of the bystander user U23 and the topics GT1, GT3, and GT4 of the groups GP1, GP3, and GP4 other than the group GP2 to which the user U23 belongs. Then, the remote meeting server 100 determines the group with the highest calculated similarity as the recommended group for the user U23. For example, the remote meeting server 100 determines the group GP1 of the topic GT1 having the highest degree of similarity with the preference PF23 of the user U23 as the recommended group for the user U23.
- the remote meeting server 100 similarly determines recommended groups for the users U3, U76, U543, U53, U44, and so on, who are also bystanders. Note that the remote meeting server 100 may or may not determine recommended groups for the users U2, U4, U11, U98, U12, U61, U102, and so on, who are parties. Also, the remote meeting server 100 may or may not notify a user who is in the party status. For example, when the remote meeting server 100 notifies a user in the party status, the notification may be made inconspicuous compared to bystanders (for example, the display may be made smaller than the display to bystanders), or the user may be notified in the same way as bystanders.
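The matching step described above can be sketched as follows (a hedged, non-limiting sketch: similarity scores are assumed precomputed, and the numeric values are illustrative; in this sketch parties simply receive no recommendation, although the description notes they may optionally be notified too):

```python
def recommend_group(user_group, status, similarities):
    """Pick the second group most similar to the user's preference.

    similarities: mapping of group name -> similarity score for the user.
    Returns None for users in the party status (sketch-level choice).
    """
    if status != "bystander":
        return None
    # Exclude the user's own (first) group; only second groups are candidates.
    candidates = {g: s for g, s in similarities.items() if g != user_group}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Illustrative values for the user U1 (belongs to GP1, status bystander).
rec = recommend_group("GP1", "bystander", {"GP2": 0.28, "GP3": 0.99, "GP4": 0.41})
```

With these illustrative scores, GP3 is returned as the recommended group for U1, mirroring the FIG. 1 example.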
- the remote meeting server 100 notifies each user of the recommended group determined by the matching process.
- the remote meeting server 100 notifies the bystander user of the recommended group for the user.
- the remote meeting server 100 notifies the user U1 of the group GP3, which is the recommended group for the user U1 (step S14-1). For example, the remote meeting server 100 notifies the user U1 of the group GP3 by transmitting information indicating the group GP3 to the terminal device 10a used by the user U1. For example, the remote meeting server 100 notifies the user U1 of the group GP3 by transmitting, to the terminal device 10a, information for displaying the screen IM1 including the information of the group GP3 as shown in FIG. 2; the details will be described later.
- the remote meeting server 100 notifies the user U23 of the group GP1, which is a recommended group for the user U23 (step S14-3). For example, the remote meeting server 100 notifies the user U23 of the group GP1 by transmitting information indicating the group GP1 to the terminal device 10c used by the user U23. For example, the remote meeting server 100 notifies the user U23 of the group GP1 by transmitting information to the terminal device 10c to display a screen containing the information of the group GP1.
- in step S14, the remote meeting server 100 may perform matching processing for the user U12 and notify the user U12 of a recommended group (step S14-2).
- steps S14-1 to S14-3 will be collectively referred to as step S14 when described without distinction.
- Steps S14-1 to S14-3 are executed at any time when the conditions are satisfied according to the progress of the remote meeting.
- step S14 may be performed together with step S11.
- as described above, the remote meeting system 1 can notify the participants participating in the remote meeting of appropriate information. With reference to FIGS. 2 to 4, examples of display on the terminal device 10 according to the presence or absence of notification of a recommended group will be described.
- FIG. 2 is a diagram showing an example of a display for bystanders.
- FIG. 2 shows an example of a display on the terminal device 10a of the user U1, who is a bystander in FIG. 1.
- in FIG. 2, a screen IM1 including information on the group GP3, which is the recommended group for the user U1, is displayed. On the screen IM1, information on the group GP1, which is the group (first group) to which the user U1 belongs, is mainly displayed.
- the information of the group GP1 in the screen IM1 includes images of users participating in the group GP1.
- images of users participating in group GP1 are displayed in area AR1
- images of users participating in group GP3 are displayed in area AR3.
- the images of the participants of the group GP1, which is the first group, and the images of the participants of the group GP3, which is the recommended group (second group) are displayed on the terminal device 10.
- the area AR1, which is the first area where the group GP1 is displayed, is larger than the area AR3, which is the second area where the group GP3 is displayed. That is, on the terminal device 10, the images of the participants of the group GP1, which is the first group, are displayed larger than the images of the participants of the group GP3, which is the recommended group (second group). In this way, the terminal device 10 displays the images of the participants of the recommended group (second group) GP3 in a second area smaller than the first area that displays the images of the participants of the first group GP1.
- the remote meeting system 1 identifiably displays the group GP1, which is the first group, and the group GP3, which is the recommended group (second group).
- the remote meeting server 100 notifies the user U1 of the recommended group (second group) by transmitting output data for displaying the screen IM1 to the terminal device 10a.
- a chat box IN1 for transmitting character information to users of group GP1 is arranged below the information of group GP1 in screen IM1.
- a group list GL displaying a list of a plurality of groups is arranged on the right side of the chat box IN1 in the screen IM1.
- a rectangle surrounded by thick lines indicates group GP1 to which user U1 currently belongs, and a rectangle surrounded by dotted lines indicates group GP3 which is a recommended group for user U1.
- when a group is specified, for example by hovering over it in the group list GL, a list of the participants of the specified group may be displayed as a popup.
- in the notification area NT1 in the screen IM1, information on the group GP3, which is the recommended group for the user U1, is arranged.
- the information of group GP3 displayed in notification area NT1 includes images of users participating in group GP3.
- a move button BT1 for moving to group GP3 is arranged below the information of group GP3 in notification area NT1.
- a chat box IN2 for transmitting character information to the users of group GP3 is arranged below the movement button BT1 in the notification area NT1.
- Audio output in the terminal device 10 shown in FIG. 2 will be described.
- the terminal device 10 shown in FIG. 2 superimposes the voice of the group GP3, which is the recommended group (second group) for the user U1, on the voice of the group GP1, which is the first group of the user U1, and outputs the superimposed voice.
- the terminal device 10 shown in FIG. 2 superimposes the voice of the group GP3 at a second volume that is lower than the first volume of the group GP1 to which the user U1 belongs.
- for example, the remote meeting server 100 sets a parameter that makes the second volume of the group GP3, which is the recommended group (second group) for the user U1, lower than the first volume of the group GP1, which is the first group of the user U1, and notifies the terminal device 10a of the parameter.
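The audio superposition described above can be sketched as follows (a non-limiting sketch: the gain value 0.3 and the sample values are hypothetical, and a real implementation would operate on streamed audio rather than whole arrays):

```python
import numpy as np

def superimpose(first_group_audio, second_group_audio, second_gain=0.3):
    """Mix the recommended (second) group's audio onto the user's own
    (first) group's audio at a lower volume (second_gain < 1.0)."""
    first = np.asarray(first_group_audio, dtype=float)
    second = np.asarray(second_group_audio, dtype=float)
    n = max(len(first), len(second))
    mix = np.zeros(n)
    mix[:len(first)] += first
    mix[:len(second)] += second_gain * second
    return np.clip(mix, -1.0, 1.0)  # keep samples within full scale
```

Mixing at a reduced gain lets the second group's conversation remain audible without drowning out the first group's conversation, which is the effect the description aims for.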
- FIG. 2 is only an example, and any display mode can be adopted for the display to the bystanders.
- FIG. 3 is a diagram showing another example of a display for bystanders.
- in FIG. 3, the same reference numerals are assigned to the same points as in FIG. 2.
- the terminal device 10 may display information indicating what conversations are taking place in group GP3, which is a recommended group.
- the terminal device 10 pops up the group keyword KW indicating what kind of conversation is being held in the group GP3 on the screen.
- "Animation XX” which is the group keyword KW of group GP3, is a keyword indicating information (topic) indicating what kind of conversation is being conducted in group GP3.
- "animation XX” is abstractly indicated, but “animation XX” is a character string indicating a specific title of the animation.
- FIG. 4 is a diagram showing an example of display to the parties.
- FIG. 4 shows, as an example, a display on the terminal device 10 of the user U2, who is a party in FIG. 1.
- in FIG. 4, the same reference numerals are assigned to the same points as in FIG. 2.
- the display 15 of the terminal device 10 shown in FIG. 4 displays a screen IM2 that does not include information on recommended groups for the user U2.
- the screen IM2 differs from the screen IM1 in that the notification area NT1 for notifying the user of the recommended group is not included.
- the information of the group GP1, which is the group (first group) to which the user U2 belongs, may be enlarged into the area corresponding to the notification area NT1 in FIG. 2 and displayed.
- the display of the notification area NT1 may be folded and hidden, or the space may be reduced.
- the notification area NT1 may be displayed at the transition timing.
- in the remote meeting system 1, in a remote meeting in which a large number of people participate, speech recognition is performed on each participant's remarks to estimate the topic being talked about, and the conversation of participants whose topic is close to a user's tastes and interests is made audible to that user.
- the participants may be arranged at different spatial positions, and display and audio output may be performed according to the positional relationship, but this point will be described later.
- each person's preferences may be registered in advance or may be estimated from statements made at that time.
- in the remote meeting system 1, in which conversations are held in a plurality of groups, the topics described above are summarized for each group, and the conversation exchanged in another group is superimposed on the conversation in the group in which the user himself or herself participates.
- the remote meeting system 1 can realize a so-called cocktail party effect in a remote meeting.
- in the remote meeting system 1, by controlling how the voice is superimposed and how it appears on the screen according to the situation, such as whether the user is a speaker or a listener, the participants can easily know what other groups are talking about and can move between groups to join a conversation.
- a group in the present application means a small conference space virtually prepared in a remote meeting in which each user (terminal device 10) participates, equivalent to a so-called breakout room. In addition, although a plurality of users often participate in a group, there may be a group in which only one user participates.
- the participation states of users participating in a remote meeting are divided into two: a state of participating as a party who actively speaks and exchanges (first state), and a state of participating as a bystander who listens to others talking from the sidelines (second state). A participant takes part in the meeting while transitioning between these two states.
- the remote meeting system 1 can estimate the state of participation as a party (first state) based on, for example, the frequency with which the user himself or herself speaks. In this case, the remote meeting system 1 presumes that a user whose frequency of speaking in a certain group is equal to or greater than a threshold is a party in that group, and presumes that a user whose frequency of speaking in a certain group during a predetermined period is less than the threshold is a bystander of that group. Note that since some people are naturally less talkative, the remote meeting system 1 may change the threshold for each user.
- The remote meeting system 1 may also estimate whether the user's participation state is that of a party or a bystander based on information such as whether the microphone is kept muted, whether other people continue to speak on a topic the user himself raised, whether the user responds to or nods at other people's comments, or whether the user's line of sight is directed at another person speaking on the screen.
- the user's participation status may be appropriately determined using various information.
- a user's participation status may be static, such as being preset.
- the remote meeting system 1 may generate a model for estimating the user's participation state by machine learning based on various information, and estimate the user's participation state using the generated model. Also, participation in each group may require the approval of existing participants.
- the remote meeting system 1 determines a group to recommend to each user by matching the contents of conversations in each group and the results of estimating preferences of each user.
- In the remote meeting system 1, the user's speech is converted into text by voice recognition on each terminal device 10. The remote meeting system 1 then estimates the preferences of each terminal user.
- To estimate preferences, the remote meeting system 1 uses information such as the user's speech history (history of voice recognition results), interests registered in advance by the user, behavior history on devices such as PCs and smartphones (for example, web browsing history, as used for web advertising), and real-world activity history, such as jogging or aerobics, recorded by a smart tracker using a vital sensor or the like.
- The user's utterance history is the result of speech recognition of the user's utterances on the system and the history of keywords extracted from them. The action history includes a history of group switching (moving) and the topics in each group at that time.
- Interests are expressed as keywords and categories such as professional baseball, politics, jazz, and the Beatles. These may be selected from categories and keywords prepared in advance by the system, or may be set by the user as free keywords.
- The remote meeting system 1 collects, for each group, the text of the speech recognition results of the utterances of the users participating in that group. Then, the remote meeting system 1 estimates topics for each group from the group's utterance history.
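As a non-limiting illustration, topic keywords for a group could be estimated from the collected speech-recognition text by simple term frequency, as in the following sketch. The function name, stop-word list, and sample utterances are assumptions for illustration; a real system would use tokenization and stop-word handling appropriate to the language of the meeting.

```python
from collections import Counter

# Minimal illustrative stop-word list; not from the specification.
STOP_WORDS = {"the", "a", "is", "and", "to", "of", "i", "it"}

def estimate_group_topics(utterances, top_n=3):
    """Estimate topic keywords for one group from the speech-recognition
    text of its participants' utterances, by word frequency."""
    words = Counter()
    for text in utterances:
        for word in text.lower().split():
            if word not in STOP_WORDS:
                words[word] += 1
    return [word for word, _ in words.most_common(top_n)]

group_utterances = [
    "did you watch the olympics marathon",
    "the olympics opening ceremony was great",
    "marathon runners train so hard",
]
topics = estimate_group_topics(group_utterances)
```

Here "olympics" and "marathon" each occur twice, so they rank as the group's leading topic keywords.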
- the remote meeting system 1 calculates the degree of similarity between the user using each terminal device 10 and each group from the preference estimation result of each terminal user and the topic estimation result of each group.
- As a method of calculating the degree of similarity, it is possible to use document classification techniques from natural language processing.
- For example, the remote meeting system 1 may calculate the similarity as the cosine similarity between the words appearing in a user's utterances, interests, and browsed web pages, and the words appearing in the utterance history of each group.
- Alternatively, the remote meeting system 1 may use machine learning technology such as deep learning to convert these words and sentences into vector embedding representations, construct a model that estimates similarity from them, and calculate the similarity using that model. Note that the above is merely an example, and the remote meeting system 1 may calculate the similarity by any method, as long as the similarity between the user and each group can be calculated.
- The remote meeting system 1 selects, for each user using a terminal device 10, the group with the highest degree of similarity and presents it as the recommended group.
- the remote meeting system 1 may determine a threshold, and if there is no group with a degree of similarity exceeding the threshold for a certain user, it may be determined that there is no group recommended for that user. If it is determined that the recommended group does not exist, the remote meeting system 1 does not notify the user of the recommended group.
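The selection rule described above (highest-similarity group, or no recommendation when no group clears the threshold) can be sketched as follows. The threshold value and function name are illustrative assumptions.

```python
# Illustrative threshold; not a value from the specification.
SIMILARITY_THRESHOLD = 0.3

def select_recommended_group(similarities, threshold=SIMILARITY_THRESHOLD):
    """similarities: mapping of group ID -> similarity to the user.
    Returns the most similar group ID, or None when no group's similarity
    reaches the threshold (in which case no recommendation is notified)."""
    best_group = max(similarities, key=similarities.get, default=None)
    if best_group is None or similarities[best_group] < threshold:
        return None
    return best_group

print(select_recommended_group({"GP1": 0.1, "GP2": 0.65}))  # GP2
print(select_recommended_group({"GP1": 0.1, "GP2": 0.2}))   # None
```

Returning None models the case where the remote meeting system 1 determines that no recommended group exists and therefore notifies nothing.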
- Next, examples of notification modes will be described. First, methods of presenting (notifying) a recommended group will be described.
- Recommended groups can be presented mainly by voice presentation, image presentation, and a combination of these.
- For voice presentation, the remote meeting system 1 can change the volume of each group's audio and mix them.
- For example, the remote meeting system 1 superimposes the voice of the recommended group at a lower volume on the voice of the group in which the user himself participates, so that the conversations of other groups can also be heard.
- As another example, the remote meeting system 1 reproduces the conversations spatially, superimposing them so that the conversation of the group in which the user himself participates is heard from the front and the conversation of the recommended group is heard from the right side.
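The two audio presentation ideas above (lower volume plus directional placement) can be combined in a minimal stereo-mixing sketch like the following. The gain values and function name are illustrative assumptions; a real implementation would use proper spatial audio rendering rather than simple right-channel panning.

```python
# Illustrative gains: the user's own group at full volume, the recommended
# group at a "second volume lower than the first volume".
OWN_GAIN = 1.0
RECOMMENDED_GAIN = 0.3

def mix_stereo(own_samples, recommended_samples):
    """Return (left, right) sample lists. The own group's audio goes to both
    channels (heard from the front); the recommended group's audio is
    attenuated and added only to the right channel (heard quietly from the
    right side)."""
    left = [OWN_GAIN * s for s in own_samples]
    right = [OWN_GAIN * s + RECOMMENDED_GAIN * r
             for s, r in zip(own_samples, recommended_samples)]
    return left, right

left, right = mix_stereo([0.5, -0.5], [1.0, 1.0])
```

Because the recommended group appears only in the right channel at 0.3 gain, it remains audible without masking the conversation of the user's own group.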
- As a first method of image presentation, the remote meeting system 1 displays the camera images of the participants of the group in which the user himself participates in the center of the screen, and displays the camera images of the recommended group's participants in small sizes at the edges.
- The display of the recommended group can also be devised so that it is folded up and hidden at times that are not suitable for recommendation, in relation to the presentation timing described later.
- As a second method, the remote meeting system 1 prepares a plurality of chat spaces in a party-hall-like image and displays the camera images of the participants in the same chat space as the user himself. In addition, the chat space of the recommended group can be displayed next to it on the screen. By combining the first method and the second method, the remote meeting system 1 can also display the camera images of the recommended group's participants in a small size.
- the direction from which the voices are heard corresponds to the direction in which the group is displayed on the screen.
- For example, the voice of the group in which the user participates can be heard from the front, and the voice of the recommended group can be heard from the right side.
- In that case, the remote meeting system 1 displays the face images of the recommended group's participants on the right side of the face images of the participants of the user's own group.
- the remote meeting system 1 makes the voice of the group audible from the direction in which the group is conversing at the party venue. Details on this point will be described later.
- the notification timing of the recommended group changes depending on the user's participation status and whether or not there is a group recommended to the user.
- Notification of a recommended group is effective when the user participates in a group as a bystander and a recommended group exists.
- It is also effective to fold up the display of the recommended group so that it is not shown when the user becomes a party. How much the user wants to be presented with recommended groups differs from user to user. Therefore, the remote meeting system 1 may allow the user to set, for example, whether a recommended group is presented only when the user presses a button because the topic of the group in which the user is participating is boring, or whether recommended groups are presented all the time.
- When a participant moves between groups, various aspects can be adopted in the remote meeting system 1 for how the move appears to the other participants of the group. There are several possible methods of notifying the other participants in the group when a participant moves.
- The notification may be the same as when a new participant joins. For example, the system may notify nothing, may notify a participation message such as "Mr. X has joined", or may add the reason for joining, such as "Mr. X seems to be interested in the topic". Also, at this time, if there are many participants meeting each other for the first time, as at a party, the remote meeting system 1 may notify the profile information of the user who has moved.
- Voice and text are generally used as means of sending messages to a group in which a user participates. Since different groups are talking about different topics, it is not common to send the same message to multiple groups at the same time, but it is possible to switch between multiple groups and send a message to each. As a result, a communication style is conceivable in which the user continues to exchange voice messages with the currently participating group while sending text messages to the recommended group. Through the above-described processing, the remote meeting system 1 can notify the participants of the remote meeting of appropriate information.
- remote meeting system 1 includes remote meeting server 100 and a plurality of terminal devices 10 .
- the remote meeting server 100 and each of the plurality of terminal devices 10 are communicably connected by wire or wirelessly via a predetermined communication network (network N).
- FIG. 5 is a diagram illustrating a configuration example of a remote meeting system according to the embodiment; Although only three terminal devices 10 are illustrated in FIG. 5, the remote meeting system 1 includes terminal devices 10 in a number equal to or greater than the number of users participating in the remote meeting. Also, the remote meeting system 1 shown in FIG. 5 may include a plurality of remote meeting servers 100 .
- the terminal device 10 is a device used by users who participate in remote meetings.
- the terminal device 10 outputs information regarding the remote meeting.
- the terminal device 10 displays an image (video) of the remote meeting and outputs audio of the remote meeting.
- the terminal device 10 transmits user's speech and images (video) to the remote meeting server 100 and receives voice and images (video) of the remote meeting from the remote meeting server 100 .
- the terminal device 10 accepts input from the user.
- the terminal device 10 receives voice input by user's utterance and input by user's operation.
- the terminal device 10 may be any device as long as it can implement the processing in the embodiments.
- the terminal device 10 may be any device as long as it has a function of displaying information about a remote meeting and outputting audio.
- the terminal device 10 may be a notebook PC (Personal Computer), a tablet terminal, a desktop PC, a smart phone, a smart speaker, a television, a mobile phone, a PDA (Personal Digital Assistant), or other device.
- the terminal device 10 also has a voice recognition function.
- the terminal device 10 has functions of natural language understanding (NLU) and automatic speech recognition (ASR).
- the terminal device 10 may have software modules for speech signal processing, speech recognition, speech semantic analysis, dialog control, and the like.
- the terminal device 10 may convert the user's utterance into text, and use the text-converted utterance (that is, character information of the utterance) to estimate the content of the user's utterance and preferences.
- The terminal device 10 may communicate with a speech recognition server having natural language understanding and automatic speech recognition functions, and obtain from the speech recognition server the utterances converted into text and information indicating the estimated utterance content and preferences.
- the remote meeting system 1 may include an administrator terminal used by the host (administrator) of the remote meeting (meeting).
- one terminal device 10 among the plurality of terminal devices 10 may be the administrator terminal used by the administrator.
- an administrator may operate the terminal device 10, which is an administrator terminal, to invite users to a meeting or divide users participating in the meeting into groups.
- the remote meeting server 100 is a computer used to provide remote meeting services to users.
- the remote meeting server 100 is an information processing device that, when the user is a bystander, notifies a group other than the group to which the user belongs and which matches the taste of the user.
- the remote meeting server 100 may have speech recognition functions such as natural language understanding and automatic speech recognition.
- the remote meeting server 100 may convert the user's utterance into text, and use the text-converted utterance (that is, character information of the utterance) to estimate the content of the user's utterance and preferences.
- FIG. 6 is a diagram illustrating a configuration example of a remote meeting server according to an embodiment of the present disclosure
- The remote meeting server 100 has a communication unit 110, a storage unit 120, and a control unit 130.
- The remote meeting server 100 may also have an input unit (for example, a keyboard or a mouse) for receiving various operations from the administrator of the remote meeting server 100, and a display unit (for example, a liquid crystal display) for displaying various types of information.
- the communication unit 110 is implemented by, for example, a NIC (Network Interface Card) or the like.
- the communication unit 110 is connected to the network N (see FIG. 5) by wire or wirelessly, and transmits and receives information to and from other information processing devices such as the terminal device 10 . Also, the communication unit 110 may transmit and receive information to and from a user terminal (not shown) used by the user.
- the storage unit 120 is implemented by, for example, a semiconductor memory device such as RAM (Random Access Memory) or flash memory, or a storage device such as a hard disk or optical disk.
- the storage unit 120 according to the embodiment has a user information storage unit 121 and a group information storage unit 122, as shown in FIG.
- the user information storage unit 121 stores various information about users. For example, the user information storage unit 121 stores information of users participating in the remote meeting. The user information storage unit 121 stores information about each user's interests and preferences. The user information storage unit 121 stores the preference estimation result for each user in association with the user. The user information storage unit 121 stores user information corresponding to information identifying each user (user ID, etc.) in association with each other.
- the group information storage unit 122 stores various information about groups.
- the group information storage unit 122 stores various types of information about multiple groups corresponding to each of the multiple groups in the remote meeting.
- FIG. 7 is a diagram illustrating an example of a group information storage unit according to an embodiment of the present disclosure;
- FIG. 7 shows an example of the group information storage unit 122 according to the embodiment.
- The group information storage unit 122 includes items such as "group ID", "topic", "participant", and "status".
- Group ID indicates identification information for identifying a group.
- Topic indicates a topic in the corresponding group.
- topic indicates a topic estimated based on the content of dialogue in the group.
- FIG. 7 shows an example in which conceptual information such as "GT1" is stored in "topic", but in reality, specific information such as keywords indicating the topics discussed in each group, for example "anime" or "Olympics", is stored.
- "Participant" indicates the participants of the group. For example, "participant" stores information (user IDs, etc.) identifying the participants (users) belonging to the group. "Status" indicates the status of each participant. FIG. 7 shows the case where either "participant" or "bystander" is stored in "status", but settings other than "participant" and "bystander" are also possible.
- FIG. 7 indicates that the group (group GP1) identified by the group ID "GP1" includes, as its members, the user (user U1) identified by the user ID "U1", the user (user U2) identified by the user ID "U2", the user (user U3) identified by the user ID "U3", the user (user U4) identified by the user ID "U4", and so on. It also indicates that the topic of group GP1 is topic GT1 and that, at that time, the two users U2 and U4 are participants while the two users U1 and U3 are bystanders.
- group information storage unit 122 may store various information not limited to the above, depending on the purpose.
- the storage unit 120 may store various information other than the above.
- the storage unit 120 stores various information regarding remote meetings.
- the storage unit 120 stores various data for providing output data to the terminal device 10 .
- the storage unit 120 stores various types of information used to generate information displayed on the terminal devices 10 of users participating in the remote meeting.
- the storage unit 120 stores information about content displayed by an application (remote meeting application, etc.) installed in the terminal device 10 .
- the storage unit 120 stores information about content displayed by the remote meeting app. Note that the above is merely an example, and the storage unit 120 may store various types of information used to provide remote meeting services to users.
- The control unit 130 is implemented by, for example, a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) executing programs stored inside the remote meeting server 100 (for example, the information processing program according to the present disclosure) using a RAM (Random Access Memory) or the like as a work area. The control unit 130 may also be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
- The control unit 130 includes an acquisition unit 131, a data holding unit 132, a group management unit 133, a topic matching unit 134, an output generation unit 135, and a notification unit 136, and implements or executes the information processing functions and operations described below.
- the internal configuration of the control unit 130 is not limited to the configuration shown in FIG. 6, and may be another configuration as long as it performs information processing described later.
- the connection relationship between the processing units of the control unit 130 is not limited to the connection relationship shown in FIG. 6, and may be another connection relationship.
- The acquisition unit 131 acquires various types of information. The acquisition unit 131 acquires various types of information from external information processing devices. The acquisition unit 131 acquires various types of information from the terminal device 10, such as information detected by the operation unit 16 (described later with reference to FIG. 8) or the voice input unit 12 (described later with reference to FIG. 8) of the terminal device 10.
- the acquisition unit 131 acquires various types of information from the storage unit 120.
- the acquisition unit 131 acquires various types of information from the user information storage unit 121 and the group information storage unit 122 .
- the acquisition unit 131 acquires various information calculated by the data holding unit 132 .
- the acquisition unit 131 acquires various information determined by the group management unit 133 .
- the acquisition unit 131 acquires information about remote meetings.
- the acquisition unit 131 acquires information such as user's utterances and images.
- the acquisition unit 131 receives information such as utterances and images of the user using the terminal device 10 from the terminal device 10 .
- the acquisition unit 131 receives from the terminal device 10 information indicating the estimation result of the user's preference.
- the acquisition unit 131 receives information such as the user's image and voice, voice recognition results, and preference estimation results from the terminal device 10 .
- the data holding unit 132 executes processing related to data holding.
- the data holding unit 132 accumulates information such as image and voice, voice recognition results, and preference estimation results transmitted from each terminal device 10 .
- the data holding unit 132 stores information such as the image/audio, voice recognition results, and preference estimation results transmitted from each terminal device 10 in the storage unit 120 .
- the group management unit 133 executes processing related to group management.
- the group management unit 133 manages groups in which the users of the terminal devices 10 participate.
- the group management unit 133 uses the speech recognition results of the participants for each group to estimate the topic of the conversation being held in that group.
- the group management unit 133 functions as an estimation unit that performs estimation processing.
- the group management unit 133 estimates the participation status of each user participating in the group.
- a group management unit 133 divides a plurality of participants participating in a remote meeting into a plurality of groups, enables conversation within each group, and estimates topics of conversation in the group from statements of participants belonging to the group.
- the group management unit 133 estimates the participation state of one participant in the first group.
- the group management unit 133 estimates whether the state of one participant is a first state, which is a state of actively participating, or a second state, which is a state other than the first state.
- the topic matching unit 134 executes processing related to topic matching.
- the topic matching unit 134 executes matching processing using the speech recognition result and the preference estimation result from each terminal device 10 and the topic estimation result for each group estimated by the group management unit 133 .
- the topic matching unit 134 matches, in addition to groups in which the user of each terminal device 10 participates, groups whose topics are similar to the preferences of the user.
- the topic matching unit 134 performs matching processing based on the similarity between the keyword indicating the user's preference and the keyword indicating the topic of each group (second group) other than the group (first group) to which the user belongs.
- the topic matching unit 134 determines whether or not the comparison result between the keyword indicating the user's preference and the keyword indicating the topic of each second group satisfies the condition regarding similarity.
- the topic matching unit 134 determines the group having the highest degree of similarity among the plurality of second groups as a group (notification group) to be notified to the user.
- The topic matching unit 134 may use a threshold. In this case, the topic matching unit 134 determines whether or not the degree of similarity between the keyword indicating the user's preference and the keyword indicating the topic of each second group is equal to or greater than a predetermined threshold, and determines a group whose similarity is equal to or greater than the threshold as the group (notification group) to be notified to the user. For example, if there are a plurality of second groups whose similarity is equal to or greater than the predetermined threshold, the topic matching unit 134 determines the group with the highest similarity as the notification group.
- the output generation unit 135 generates various types of information.
- the output generation unit 135 generates various types of information based on information from an external information processing device and information stored in the storage unit 120 .
- the output generation unit 135 generates various types of information based on information from other information processing devices such as the terminal device 10 .
- the output generation unit 135 generates various types of information based on the information stored in the user information storage unit 121 and the group information storage unit 122.
- the output generator 135 generates various information to be displayed on the terminal device 10 based on the information determined by the group manager 133 .
- The output generation unit 135 organizes information into the required output form, including the voices and images of the group in which each terminal device 10's user participates and the voices and images of the group matched by the topic matching unit 134.
- the output generation unit 135 adjusts various parameters according to the group to which the user belongs, the participation state of the user, and the like.
- the output generator 135 adjusts the parameters depending on whether the user is a participant or a bystander.
- the output generation unit 135 may generate information to be provided to the terminal device 10 using various techniques related to remote meetings such as web conferences.
- the output generation unit 135 generates output data to be provided to the terminal device 10 using the adjusted parameters.
- the output generation unit 135 generates output data used for information output of remote meetings in the terminal device 10 .
- the output generation unit 135 generates output data including parameters indicating the sound volume of each group and the arrangement position and size of the image of each group.
- the output generation unit 135 generates output data in which the voices of the participants in the second group are superimposed on the voices of the participants in the first group.
- the output generation unit 135 generates output data for superimposing the voices of the participants of the second group at a second volume lower than the first volume of the first group.
- the output generation unit 135 generates output data for displaying the images of the participants of the second group together with the images of the participants of the first group.
- the output generation unit 135 generates output data for displaying the first group and the second group so as to be identifiable.
- the output generation unit 135 generates output data for displaying the images of the participants of the second group in a second area smaller than the first area displaying the images of the participants of the first group.
- When notifying the voices and images of the participants in the second group, the output generation unit 135 generates output data that conveys the correspondence between those voices and images in a recognizable manner. For example, the output generation unit 135 generates output data for notification in such a manner that the voices of the participants in the second group are output from the direction corresponding to the relationship between the display positions of the first group and the second group.
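The kind of output data described above (per-group volume, display position and size, with the audio direction matching the display position) can be illustrated with the following non-limiting sketch. The data layout, gain values, state names, and function name are assumptions for illustration only.

```python
def build_output_data(first_group, second_group, state):
    """Assemble illustrative output parameters for one user's terminal.
    The second (recommended) group is folded up when the user is a party
    (first state); when shown, its audio pan matches its display position."""
    show_recommendation = state == "bystander" and second_group is not None
    output = {
        # The user's own group: full volume, large, in the center.
        first_group: {"volume": 1.0, "position": "center", "size": "large"},
    }
    if show_recommendation:
        # Recommended group: lower volume, small, on the right edge, with
        # audio output from the right so voice and image correspond.
        output[second_group] = {"volume": 0.3, "position": "right",
                                "size": "small", "audio_pan": "right"}
    return output

data = build_output_data("GP1", "GP2", "bystander")
```

When the same call is made with the state "party", the returned data contains only the first group, modeling the folded-up recommendation display.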
- the output generation unit 135 may generate a display screen (content) to be displayed on the terminal device 10 as output data.
- the output generation unit 135 may generate a screen (content) to be provided to the terminal device 10 by appropriately using various technologies such as Java (registered trademark).
- the output generation unit 135 may generate a screen (content) to be provided to the terminal device 10 based on CSS, JavaScript (registered trademark), or HTML format.
- the output generation unit 135 may generate screens (contents) in various formats such as JPEG (Joint Photographic Experts Group), GIF (Graphics Interchange Format), and PNG (Portable Network Graphics).
- the notification unit 136 executes processing related to notification to the user.
- the notification unit 136 transmits information to be notified to the terminal device 10 .
- the notification unit 136 transmits the information generated by the output generation unit 135 to the terminal device 10 .
- the notification unit 136 transmits the output data generated by the output generation unit 135 to the terminal device 10 .
- The notification unit 136 transmits information indicating the groups matched by the topic matching unit 134 to the terminal device 10. When the topic matching unit 134 has determined a second group for the user and the user is a bystander, the notification unit 136 determines that the conditions are satisfied and notifies the user of the second group.
- the notification unit 136 changes the notification response regarding the second group according to the participation state of one participant in the first group estimated by the group management unit 133 .
- the notification unit 136 changes the mode of notification of the second group to one participant according to whether the one participant is in the first state or the second state.
- the notification unit 136 changes the notification timing of the second group to one participant depending on whether the one participant is in the first state or the second state.
- the notification unit 136 changes the display mode of the second group for one participant depending on whether the one participant is in the first state or the second state.
- the notification unit 136 notifies the second group to the one participant when the one participant is in the second state.
- the notification unit 136 notifies the one participant of the second group at the timing when the participation state of the one participant transitions from the first state to the second state.
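The timing rule above (notify when the participation state transitions from the first state to the second state) can be sketched as follows. The class and state names are illustrative assumptions.

```python
class NotificationTimer:
    """Tracks one participant's state and signals the moment it transitions
    from the first state ("party") to the second state ("bystander")."""

    def __init__(self):
        self.previous_state = None

    def should_notify(self, current_state):
        transitioned = (self.previous_state == "party"
                        and current_state == "bystander")
        self.previous_state = current_state
        return transitioned

timer = NotificationTimer()
print(timer.should_notify("party"))      # False: actively speaking
print(timer.should_notify("bystander"))  # True: just became a bystander
print(timer.should_notify("bystander"))  # False: no new transition
```

Gating on the transition rather than on the state itself avoids repeatedly notifying a participant who remains a bystander.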
- the notification unit 136 superimposes the voices of the participants in the second group on the voices of the participants in the first group.
- the notification unit 136 superimposes the voices of the participants of the second group at a second volume that is lower than the first volume of the first group.
- the notification unit 136 displays the images of the participants of the second group together with the images of the participants of the first group.
- the notification unit 136 displays the first group and the second group so as to be identifiable.
- the notification unit 136 displays the images of the participants of the second group in a second area that is smaller than the first area that displays the images of the participants of the first group.
- When notifying the voices and images of the participants in the second group, the notification unit 136 notifies the correspondence between those voices and images in a recognizable manner. For example, the notification unit 136 notifies in a manner in which the voices of the participants in the second group are output from the direction corresponding to the relationship between the display positions of the first group and the second group.
- The notification unit 136 notifies the one participant of the second group based on the group keyword of the second group and the participant keyword of the one participant.
- the notification unit 136 notifies the second group to the one participant when the comparison result between the group keyword of the second group and the participant keyword of the one participant satisfies the similarity condition.
- the notification unit 136 notifies one participant of the second group when the degree of similarity between the group keyword of the second group and the participant keyword of one participant is greater than or equal to a predetermined threshold.
- FIG. 8 is a diagram illustrating a configuration example of a terminal device according to an embodiment of the present disclosure.
- The terminal device 10 includes a communication unit 11, an audio input unit 12, an audio output unit 13, a camera 14, a display 15, an operation unit 16, a storage unit 17, and a control unit 18.
- the communication unit 11 is implemented by, for example, a NIC, a communication circuit, or the like.
- the communication unit 11 is connected to a predetermined communication network (network) by wire or wirelessly, and transmits and receives information to and from an external information processing device.
- the communication unit 11 is connected to a predetermined communication network by wire or wirelessly, and transmits and receives information to and from the remote meeting server 100 .
- the voice input unit 12 is, for example, a microphone, etc., and detects voice.
- the voice input unit 12 detects user's speech.
- the voice input unit 12 may have any configuration as long as it can detect the user's speech information necessary for processing.
- the audio output unit 13 is realized by a speaker that outputs audio, and is an output device for outputting various types of information as audio.
- the audio output unit 13 audio-outputs content provided from the remote meeting server 100 .
- the audio output unit 13 outputs audio corresponding to information displayed on the display 15 .
- the terminal device 10 performs voice input/output using a voice input section 12 and a voice output section 13 .
- the camera 14 has an image sensor that detects images.
- a camera 14 photographs the users participating in the remote meeting.
- when the terminal device 10 is a notebook computer, the camera 14 may be built into the terminal device 10 and arranged above the display 15 .
- the camera 14 may be an in-camera built into the terminal device 10 .
- the display 15 is a display screen of a tablet terminal realized by, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display, and is a display device (display unit) for displaying various information.
- the display 15 displays various information regarding remote meetings in which the user is participating.
- the display 15 displays content.
- the display 15 displays various information received from the remote meeting server 100 .
- the display 15 outputs remote meeting information received from the remote meeting server 100 .
- the display 15 displays information on groups to which the user belongs.
- the terminal device 10 inputs and outputs images using the camera 14 and the display 15 .
- the operation unit 16 accepts various user operations.
- the operation unit 16 is a keyboard, mouse, or the like.
- the operation unit 16 may have a touch panel capable of realizing functions equivalent to those of a keyboard and a mouse.
- the operation unit 16 receives various operations from the user via the display screen using the functions of a touch panel realized by various sensors.
- the operation unit 16 receives various operations from the user via the display 15 .
- the operation unit 16 receives operations such as user designation operations via the display 15 of the terminal device 10 .
- for the touch panel, the tablet terminal mainly adopts the capacitance method, but other detection methods such as the resistive film method, the surface acoustic wave method, the infrared method, and the electromagnetic induction method may be adopted, as long as the user's operation can be detected and the function of the touch panel can be realized.
- the terminal device 10 may have a configuration that accepts (detects) various types of information as input, not limited to the above.
- the terminal device 10 may have a line-of-sight sensor that detects the user's line of sight.
- the line-of-sight sensor detects the user's line-of-sight direction using eye-tracking technology based on the detection results of the camera 14 mounted on the terminal device 10, an optical sensor, and a motion sensor (all not shown).
- the line-of-sight sensor determines a region of the screen that the user is gazing at based on the detected line-of-sight direction.
- the line-of-sight sensor transmits line-of-sight information including the determined gaze area to the remote meeting server 100 .
- the terminal device 10 may have a motion sensor that detects a user's gesture or the like.
- the terminal device 10 may receive an operation by a user's gesture using a motion sensor.
- the storage unit 17 is implemented by, for example, a semiconductor memory device such as RAM (Random Access Memory) or flash memory, or a storage device such as a hard disk or optical disk.
- the storage unit 17 stores various information received from the remote meeting server 100, for example.
- the storage unit 17 stores, for example, information about applications (for example, remote meeting applications, etc.) installed in the terminal device 10, such as programs.
- the storage unit 17 stores user information.
- the storage unit 17 stores various kinds of information used for user preference estimation.
- the storage unit 17 stores the user's utterance history (speech recognition result history) and action history.
- the storage unit 17 also stores items of interest registered in advance by the user.
- the storage unit 17 stores a history of actions performed by the user using a device such as the terminal device 10 .
- the storage unit 17 stores a user's action history on the Internet, such as a homepage browsing history.
- the storage unit 17 also stores a history of the user's real-world activities, such as jogging and aerobics, recorded by a smart tracker using a vital sensor or the like.
- the user's utterance history is the result of speech recognition of the user's utterance on the system and the history of keywords extracted from it.
- the action history includes a history of group switching (change) and topics in the group at that time.
- as items of interest registered in advance by the user, items that the user is interested in on a daily basis are registered as keywords or categories, such as professional baseball, politics, jazz, and the Beatles.
- the category or keyword may be selected from those prepared in advance by the system, or the user may be able to set a free keyword.
- the storage unit 17 stores information of a speech recognition application (program) that implements the speech recognition function.
- the terminal device 10 can perform speech recognition by activating a speech recognition application (simply referred to as “speech recognition”).
- the storage unit 17 stores various information used for speech recognition.
- the storage unit 17 stores information of a dictionary (speech recognition dictionary) used for speech recognition.
- the storage unit 17 stores information of a plurality of speech recognition dictionaries.
- the control unit 18 is a controller.
- the control unit 18 is realized by, for example, a CPU, an MPU, or the like executing various programs stored in a storage device such as the storage unit 17 inside the terminal device 10, using the RAM as a work area.
- these various programs include programs of applications (for example, remote meeting applications) that perform information processing.
- the control unit 18 is a controller, and is realized by an integrated circuit such as ASIC or FPGA, for example.
- the control unit 18 includes an acquisition unit 181, a speech recognition unit 182, a preference estimation unit 183, and a processing unit 184, and implements or executes the information processing functions and actions described below.
- the internal configuration of the control unit 18 is not limited to the configuration shown in FIG. 8, and may be another configuration as long as it performs the information processing described later.
- the connection relationship between the processing units of the control unit 18 is not limited to the connection relationship shown in FIG. 8, and may be another connection relationship.
- the acquisition unit 181 acquires various types of information. For example, the acquisition unit 181 acquires various information from an external information processing device. For example, the acquisition unit 181 stores the acquired various information in the storage unit 17 . The acquisition unit 181 acquires user operation information accepted by the operation unit 16 .
- the acquisition unit 181 acquires the user's utterance information.
- the acquisition unit 181 acquires user utterance information detected by the voice input unit 12 .
- the acquisition unit 181 receives information from the remote meeting server 100 via the communication unit 11 .
- Acquisition unit 181 receives information provided by remote meeting server 100 .
- Acquisition unit 181 receives content from remote meeting server 100 .
- the voice recognition unit 182 executes various processes related to voice recognition.
- the speech recognition unit 182 uses the information stored in the storage unit 17 to perform speech recognition processing.
- the voice recognition unit 182 converts the voice of the user's utterance into text by converting the utterance into character information.
- the speech recognition unit 182 can be realized by using existing speech semantic analysis technology.
- the speech recognition unit 182 analyzes the content of the user's utterance.
- the voice recognition unit 182 estimates the content of the user's utterance by analyzing the user's utterance using various conventional techniques as appropriate. For example, the speech recognition unit 182 analyzes the content of the user's utterances using natural language understanding (NLU) and automatic speech recognition (ASR) functions.
- NLU natural language understanding
- ASR automatic speech recognition
- the speech recognition unit 182 uses character information corresponding to the user's utterance to estimate (specify) the content of the user's utterance through semantic analysis. For example, the speech recognition unit 182 analyzes the character information using various conventional techniques such as syntax analysis as appropriate, thereby estimating the content of the user's utterance corresponding to the character information.
- the preference estimation unit 183 executes processing for estimating the user's preferences.
- the preference estimation unit 183 estimates the user's preference using the user's action history.
- the preference estimation unit 183 estimates the user's preference using the user's speech history.
- the preference estimation unit 183 estimates the user's preference based on the user's utterance.
- the preference estimation unit 183 estimates the user's current preferences by using various types of information such as textual utterance contents and pre-registered user interests.
- the preference estimation unit 183 uses various information stored in the storage unit 17 to estimate the user's preferences. For example, the preference estimation unit 183 estimates the user's preference using the user's utterance history (speech recognition result history) and action history. In addition, the preference estimation unit 183 estimates the user's preference using items of interest registered in advance by the user. The preference estimation unit 183 estimates the user's preference using the history of actions performed by the user using a device such as the terminal device 10 . The preference estimation unit 183 estimates the user's preference using the user's action history on the Internet, such as the home page browsing history. Also, the preference estimation unit 183 estimates the user's preference using a real user's activity history such as jogging and aerobics by a smart tracker using a vital sensor or the like.
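A minimal Python sketch of such preference estimation, assuming the utterance, browsing, and activity histories are already reduced to keyword lists; the extra weight given to pre-registered interests is an illustrative assumption, not a value from the disclosure:

```python
from collections import Counter


def estimate_preferences(utterance_keywords, browsing_keywords,
                         registered_interests, top_n=5):
    """Merge keyword histories and return the top-n keywords as the user's preferences."""
    counts = Counter(utterance_keywords)
    counts.update(browsing_keywords)
    for item in registered_interests:
        counts[item] += 3  # assumed extra weight for explicitly registered interests
    return [kw for kw, _ in counts.most_common(top_n)]
```

In practice each history source could carry its own weight, and recency decay could be applied to older keywords.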
- the processing unit 184 executes various types of processing.
- the processing unit 184 displays various information via the display 15 .
- the processing unit 184 controls display on the display 15 .
- the processing unit 184 outputs various kinds of information as voice through the voice output unit 13 .
- the processing unit 184 controls audio output of the audio output unit 13 .
- the processing unit 184 outputs the information received by the acquisition unit 181.
- the processing unit 184 outputs content provided by the remote meeting server 100 .
- Processing unit 184 outputs the content received by acquisition unit 181 via audio output unit 13 or display 15 .
- the processing unit 184 displays content via the display 15 .
- the processing unit 184 outputs the contents as audio through the audio output unit 13 .
- the processing unit 184 functions as a notification unit that notifies the user of the second group when the topic of the second group and the user's preference satisfy a similarity condition.
- the processing unit 184 notifies the user of the recommended group via the audio output unit 13 or the display 15 .
- the processing unit 184 transmits various information to an external information processing device via the communication unit 11 .
- the processing unit 184 transmits various information to the remote meeting server 100 .
- the processing unit 184 transmits various information stored in the storage unit 17 to an external information processing device.
- the processing unit 184 transmits various information acquired by the acquisition unit 181 to the remote meeting server 100 .
- the processing unit 184 transmits the sensor information acquired by the acquisition unit 181 to the remote meeting server 100 .
- the processing unit 184 transmits the user operation information received by the operation unit 16 to the remote meeting server 100 .
- the processing unit 184 transmits information such as speech and images of the user using the terminal device 10 to the remote meeting server 100 .
- each process performed by the control unit 18 described above may be implemented by, for example, JavaScript (registered trademark).
- each unit of the control unit 18 may be realized by the predetermined application, for example.
- processing such as information processing by the control unit 18 may be realized by control information received from an external information processing device.
- the control unit 18 may have an application control unit that controls the predetermined application or dedicated application, for example.
- FIG. 9 is a flow chart showing the processing procedure of the information processing device according to the embodiment of the present disclosure. Specifically, FIG. 9 is a flowchart showing the procedure of information processing by the remote meeting server 100, which is an example of an information processing apparatus.
- the remote meeting server 100 estimates topics of conversation in the group from statements of participants belonging to the group (step S101).
- the remote meeting server 100 determines whether the topic of the second group, which is a group other than the first group to which the one participant belongs, and the interest of the one participant satisfy a similarity condition (step S102). If the topic of the second group and the interest of the one participant satisfy the similarity condition (step S102: Yes), the remote meeting server 100 notifies the one participant of the second group (step S103).
- in step S102, if the topic of the second group and the interest of the one participant do not satisfy the similarity condition (step S102: No), the remote meeting server 100 ends the process without notifying the one participant of the second group. Note that even if the topic of the second group and the interest of the one participant satisfy the similarity condition, when the one participant is in a state of being a party in the first group to which the one participant belongs, the remote meeting server 100 may end the process without notifying the one participant of the second group.
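The branch of steps S101 to S103 can be sketched as follows; this is an illustrative Python sketch in which the class, its data structures, and the exact-keyword-match similarity test are assumptions, not taken from the disclosure (the disclosure's similarity condition is a degree-of-similarity threshold):

```python
class RemoteMeetingServer:
    """Minimal sketch of steps S101-S103 of FIG. 9; data layouts are illustrative."""

    def __init__(self, groups, memberships, is_party):
        self.groups = groups            # {group_id: [keywords spoken in that group]}
        self.memberships = memberships  # {participant_id: group_id of first group}
        self.is_party = is_party        # {participant_id: True if actively speaking}
        self.notifications = []

    def estimate_topic(self, group_id):
        # S101: the most frequent keyword stands in for the group's topic
        kws = self.groups[group_id]
        return max(set(kws), key=kws.count) if kws else None

    def run(self, participant_id, interest):
        first = self.memberships[participant_id]
        for gid in self.groups:
            if gid == first:
                continue  # only groups other than the first group are candidates
            # S102: similarity condition (here simplified to an exact match)
            if self.estimate_topic(gid) == interest:
                # a party in the first group is not notified (bystanders only)
                if not self.is_party[participant_id]:
                    self.notifications.append((participant_id, gid))  # S103
```

The group and participant identifiers used below (GP21, GP25, U22) follow the figures of the disclosure but the scenario itself is hypothetical.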
- FIG. 10 is a flow chart showing processing procedures in the terminal device according to the embodiment of the present disclosure.
- FIG. 10 is a flowchart showing an example of processing when the terminal device 10 provides a remote meeting-related service.
- the terminal device 10 holds the audio image data in the buffer (step S201).
- the terminal device 10 holds audio/image data including user's audio data, image data, etc. in a buffer.
- the terminal device 10 executes speech recognition processing on the speech data (step S202).
- the terminal device 10 determines whether or not the end of the voice is detected (step S203). It should be noted that detection of the end of speech may be performed using various conventional techniques as appropriate, and detailed description thereof will be omitted.
- if the end of the voice is detected (step S203: Yes), the terminal device 10 uses the voice recognition result to estimate the topic (step S204). For example, the terminal device 10 uses the speech recognition result to estimate the topic (preference) of the user's utterance based on the content of the user's utterance. For example, the terminal device 10 extracts a keyword from character information obtained by converting a user's utterance into text, and presumes that the extracted keyword is the topic of the user's utterance, that is, the user's preference at that time.
- the terminal device 10 outputs the result to the buffer (step S205). For example, the terminal device 10 adds information (preference information) indicating the estimated preference to the buffer.
- the terminal device 10 outputs the data in the buffer to the server at regular intervals (step S206).
- the terminal device 10 transmits the data held in the buffer to the remote meeting server 100 at regular intervals. For example, if there is preference information in the buffer, the terminal device 10 transmits the buffered audio and image data and the preference information to the remote meeting server 100 .
- if the end of the voice is not detected in step S203 (step S203: No), the terminal device 10 performs the process of step S206 without estimating the topic of the user's utterance (the user's preference). For example, when there is no preference information in the buffer, the terminal device 10 transmits the audio/video data in the buffer to the remote meeting server 100.
- the terminal device 10 may transmit information to the remote meeting server 100 at any timing as long as it can provide services related to remote meetings.
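The terminal-side loop of steps S201 to S206 might assemble its periodic payload as follows; this Python sketch, the payload layout, and the toy keyword extraction are assumptions for illustration only:

```python
def build_payload(audio_chunks, recognized_text, end_of_speech):
    """S201-S206: buffer audio data; attach preference info only at the end of speech."""
    payload = {"audio": list(audio_chunks)}  # S201: audio/image data held in the buffer
    if end_of_speech and recognized_text:    # S203: end of the voice detected
        # S204: toy keyword extraction standing in for topic (preference) estimation
        keywords = [w for w in recognized_text.split() if len(w) > 3]
        if keywords:
            payload["preference"] = keywords  # S205: add preference info to the buffer
    return payload                            # S206: sent to the server periodically
```

A real implementation would of course use a proper keyword extractor rather than a word-length filter, and would flush the buffer after each transmission.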
- FIG. 11 is a flow chart showing processing procedures of the remote meeting server according to the embodiment of the present disclosure.
- FIG. 11 is a flow chart showing an example of processing when the remote meeting server 100 provides a remote meeting-related service.
- the remote meeting server 100 receives data from each terminal and holds it in the data holding unit (step S301). For example, the remote meeting server 100 receives data from each terminal device 10 and stores the received data in the storage unit 120 by the data holding unit 132 .
- the remote meeting server 100 confirms the recommended group for each terminal (step S303). For example, the remote meeting server 100 confirms recommended groups for users using each terminal device 10 at predetermined intervals (for example, several seconds, several tens of seconds, etc.).
- the remote meeting server 100 estimates the user participation state of each terminal (step S304), and branches the processing according to the estimation result.
- if the user's participation state is estimated to be that of a bystander in step S304, the remote meeting server 100 updates the output generation parameters taking the recommended group into consideration (step S305). For example, the remote meeting server 100 updates the output generation parameters used to generate the output data to be transmitted to the terminal device 10 of the user so as to include the recommended group for the user. Then, the remote meeting server 100 executes the process of step S306.
- if the user's participation state is estimated to be that of a party in step S304, step S306 is executed without performing the process of step S305. That is, when the remote meeting server 100 estimates that the user's participation state is that of a party, the remote meeting server 100 executes the process of step S306.
- the remote meeting server 100 generates output data for each terminal according to the output generation parameters (step S306).
- the remote meeting server 100 generates output data to be provided to the terminal device 10 of the user according to the presence or absence of a recommended group for the user.
- the remote meeting server 100 when the output generation parameter includes information on a recommended group, the remote meeting server 100 generates output data to be transmitted to the user's terminal device 10 with the contents of the recommended group for the user added. In this case, the remote meeting server 100 generates output data so that the recommended group for the user is displayed on the terminal device 10 of the user, as shown in FIG. 2, for example.
- when the output generation parameter does not include information on a recommended group, the remote meeting server 100 generates output data to be sent to the terminal device 10 of the user without considering the recommended group for that user.
- the remote meeting server 100 generates output data so that the recommended group for the user is not displayed on the terminal device 10 of the user, as shown in FIG. 4, for example.
- when it is not the timing for confirming the recommended group for each terminal, the remote meeting server 100 executes the process of step S306 without performing the processes of steps S303 to S305.
- the remote meeting server 100 generates output data to be transmitted to the terminal device 10 of the user, for example, without considering the recommended group for the user.
- the remote meeting server 100 transmits data to each terminal (step S307).
- the remote meeting server 100 transmits output data generated for each user to the terminal device 10 used by each user.
- FIG. 12 is a flow chart showing the matching processing procedure.
- the matching process shown in FIG. 12 corresponds to steps S202 to S204 in FIG. 10 and steps S301 to S303 in FIG. 11, for example.
- in FIG. 12, a case where the remote meeting system 1 performs the processing will be described as an example, but the processing shown in FIG. 12 may be performed by any device included in the remote meeting system 1.
- the remote meeting system 1 converts the utterance into text by voice recognition (step S401). For example, the remote meeting system 1 converts a user's utterance into character information by voice recognition.
- the remote meeting system 1 estimates the user's preferences (step S402). For example, the remote meeting system 1 extracts a keyword from character information obtained by converting a user's utterance into text, and presumes that the extracted keyword is the user's preference.
- the remote meeting system 1 aggregates the speech recognition results of each group participant (step S403). For example, the remote meeting system 1 collects character information in which participants' (users') utterances are converted into text for each group.
- the remote meeting system 1 estimates topics from the utterance history of each group (step S404). For example, the remote meeting system 1 extracts a keyword from text information in the utterance history of each group, and presumes that the extracted keyword is the topic of the group.
- the remote meeting system 1 calculates the degree of similarity between each user's preference estimation result and each group's topic (step S405). For example, the remote meeting system 1 calculates the similarity between each user's preference and the topic of a group (second group) other than the group (first group) to which the user belongs.
- the remote meeting system 1 determines the group with the highest degree of similarity to each user (step S406). For example, the remote meeting system 1 determines a group (second group) having the highest degree of similarity for each user as a recommended group.
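Steps S401 to S406 can be sketched end to end as follows; this Python sketch is illustrative, and the use of the most common words as topic keywords and of keyword overlap as the degree of similarity are assumptions, not the disclosure's specific method:

```python
from collections import Counter


def match_recommended_groups(group_utterances, user_preferences, user_groups):
    """S403-S406: topic per group, similarity per user, pick the best other group."""
    # S403-S404: topic keywords per group (top words stand in for topic estimation)
    topics = {g: set(k for k, _ in Counter(ws).most_common(3))
              for g, ws in group_utterances.items()}
    recommended = {}
    for user, prefs in user_preferences.items():
        own = user_groups[user]  # the first group is excluded from candidates
        # S405: degree of similarity = overlap between preferences and group topic
        scores = {g: len(topics[g] & set(prefs)) for g in topics if g != own}
        best = max(scores, key=scores.get, default=None)  # S406: highest similarity
        if best is not None and scores[best] > 0:
            recommended[user] = best
    return recommended
```

Steps S401 and S402 (speech-to-text and preference extraction) are assumed to have already produced the keyword lists passed in here.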
- the output mode of the remote meeting is not limited to the examples shown in FIGS. 2 to 4, and various output modes may be used.
- the output mode may reflect the virtual position of each group or each user.
- since FIG. 13 describes the output modes of the display and the audio output, the description of the processing for determining the recommended group, such as the matching processing on which it is premised, will be omitted. Further, in FIG. 13, description of the same points as in FIGS. 2 to 4 will be omitted as appropriate.
- FIG. 13 is a diagram showing an example of a display mode.
- FIG. 13 shows, as an example, the display on the terminal device 10 of user U22 who is a bystander in group GP21.
- on the display 15 of the terminal device 10 shown in FIG. 13, a screen IM21 is displayed that includes a virtual party venue PV21 reflecting the virtual positions of each group and each user, and a notification area NT21 for notifying the user U22 of recommended groups.
- icons IC21 to IC23 corresponding to the users U21 to U23 of the group GP21 are arranged in the virtual party venue PV21.
- an icon IC22 corresponding to the user U22 is arranged below the table in the area AR21 where the group GP21 is arranged.
- an icon IC21 corresponding to the user U21 is arranged diagonally to the left of the icon IC22, and an icon IC23 corresponding to the user U23 is arranged diagonally to the right of the icon IC22.
- a group GP25 which is a recommended group (second group) for the user U22, is placed on a table in an area AR25 located on the left side of the area AR21 where the group GP21 is placed.
- icons IC51 and IC52 corresponding to the two users belonging to group GP25 are arranged on the table on the left side of the table on which group GP21 is arranged.
- information on group GP25 which is a recommended group for user U22, is arranged in notification area NT21 in screen IM21.
- the information of group GP25 displayed in notification area NT21 includes images of users participating in group GP25.
- a move button BT21 for moving to group GP25 is arranged below the information of group GP25 in notification area NT21. For example, when the user U22 selects the move button BT21, the group to which the user U22 belongs is changed from group GP21 to group GP25.
- a chat box IN21 for transmitting character information to the users of group GP25 is arranged below the movement button BT21 in the notification area NT21.
- Audio output in the terminal device 10 shown in FIG. 13 will be described.
- the terminal device 10 shown in FIG. 13 superimposes the voice of group GP25, which is the recommended group (second group) for user U22, on the voice of group GP21, which is the first group of user U22, and outputs the superimposed voice.
- the terminal device 10 outputs the voices of the participants of the second group from the direction corresponding to the relationship between the display positions of the first group and the second group.
- the terminal device 10 shown in FIG. 13 outputs the voice of the group GP25 from the left side because the group GP25 is located on the left side of the user U22 of the group GP21.
- the terminal device 10 adjusts the volume of the sound of the group GP25 so that the output sound volume in the left direction is higher than the output sound volume in the right direction, and outputs the sound of the group GP25.
- the remote meeting server 100 sets a parameter for the voice of the group GP25 so that the output volume in the left direction is higher than the output volume in the right direction, and notifies the terminal device 10a.
- the voices of the participants in the second group may be notified in a manner in which they are output from the direction corresponding to the relationship between the display positions of the first group and the second group. For example, when the terminal device 10 has speakers on the left and right sides, the sound of the group GP25 may be output only from the speaker on the left side.
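A simple way to realize such direction-dependent output is to derive per-channel gains from the horizontal display positions of the two groups; the coordinate convention and attenuation factor in this Python sketch are illustrative assumptions:

```python
def pan_gains(group_x, listener_x, max_attenuation=0.5):
    """Left/right channel gains so a group displayed to the left sounds from the left.

    Positions are horizontal screen coordinates in [0, 1]; a negative offset means
    the second group is displayed to the left of the listener's own group.
    """
    offset = max(-1.0, min(1.0, group_x - listener_x))
    left = 1.0 - max(0.0, offset) * max_attenuation    # attenuate left if group is right
    right = 1.0 + min(0.0, offset) * max_attenuation   # attenuate right if group is left
    return left, right
```

For example, a recommended group displayed to the left (as group GP25 is for user U22) yields a full-volume left channel and an attenuated right channel, so its voices appear to come from the left.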
- when the remote meeting system 1 notifies the voices and images of the participants in the second group, any form of notification may be used as long as the correspondence between the voices and images of the participants in the second group can be notified in a recognizable manner.
- for example, the terminal device 10 may be VR goggles; in that case, the terminal device 10 may display a virtual space such as a game and output audio in a positional relationship corresponding to the displayed virtual space.
- VR Virtual Reality
- the terminal device 10 of the user U21 displays a screen that does not include information on groups recommended to the user U21 and the notification area NT21.
- the terminal device 10 of the user U21 may display a virtual party venue PV21 in which only the icons IC21 to IC23 corresponding to the users U21 to U23 of the group GP21 are arranged.
- in the above example, the terminal device 10 performs voice recognition and preference estimation, and the remote meeting server 100 performs group topic estimation and matching processing; however, the remote meeting system 1 can adopt any aspect of functional division.
- the remote meeting server 100 may perform speech recognition and estimation of each user's preferences.
- each terminal device 10 has the speech recognition unit 182 and the preference estimation unit 183, but these may be collectively implemented by the server.
- the remote meeting server 100 holds information stored in the storage unit 17 and has the functions of a speech recognition unit 182 and a preference estimation unit 183 .
- the terminal device 10 functions as a so-called thin client, a device that transmits information such as user speech and images to the remote meeting server 100 and outputs information received from the remote meeting server 100 .
- the remote meeting system 1 may have a configuration of a so-called centralized system, such as a client-server system in which main processes are executed on the server side.
- the terminal device 10 may perform group topic estimation and matching processing.
- the terminal device 10 functions as an information processing device that notifies of recommended groups.
- the terminal device 10 holds information stored in the storage unit 120 and has functions of a data holding unit 132 , a group management unit 133 , a topic matching unit 134 , an output generation unit 135 and a notification unit 136 .
- the audio output unit 13 , the display 15 , the processing unit 184 and the like of the terminal device 10 function as the notification unit 136 .
- the terminal device 10 functions as a rich client that performs voice recognition, user preference estimation, group topic estimation, and matching processing.
- the remote meeting server 100 collects information from each terminal device 10 and provides each terminal device 10 with necessary information.
- the remote meeting system 1 may not include a remote meeting server.
- for example, the remote meeting system 1 may have a system configuration in which the main processing is executed on the user's terminal (client) side and the server only manages information related to the remote meeting, or a system configuration that does not include a server, that is, a so-called autonomous decentralized system configuration.
- the remote meeting system 1 may have any configuration, such as a centralized type or an autonomous decentralized type.
- the remote meeting system 1 may have any functional division mode and any device configuration as long as it can provide the above-described remote-meeting-related service.
- each component of each device illustrated is functionally conceptual and does not necessarily need to be physically configured as illustrated.
- the specific form of distribution and integration of each device is not limited to the one shown in the figure, and all or part of them can be functionally or physically distributed and integrated in arbitrary units according to various loads and usage conditions.
- the information processing apparatus includes an estimation unit (the group management unit 133 in the embodiment) and a notification unit (the notification unit 136 in the embodiment).
- the estimation unit estimates the topic of the conversation in a group from the statements of the participants belonging to the group, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and each group can hold its own conversation. When the topic of a second group, which is a group other than the first group to which one participant belongs, and the preference of that participant satisfy the similarity condition, the notification unit notifies the one participant of the second group.
- in this way, the information processing apparatus notifies one participant (user) of groups other than the one to which he or she belongs, according to the similarity between each group's topic and that participant's preference, and can thereby notify participants in the remote meeting of appropriate information.
- the estimation unit estimates the state of participation of one participant in the first group.
- the notification unit changes the notification behavior regarding the second group according to the participation state of the one participant in the first group estimated by the estimation unit.
- in this way, the information processing device can notify participants in the remote meeting of appropriate information by changing its notification behavior according to the participation state of the one participant.
- the estimation unit estimates whether the state of one participant is a first state, which is a state of actively participating, or a second state, which is a state other than the first state.
- the notification unit changes the mode of notification of the second group to the one participant according to whether the one participant is in the first state or the second state. In this way, by changing its notification behavior according to whether the participation state of one participant is the first state, in which the participant is actively participating, or the second state, which is any other state, the information processing apparatus can notify participants in the remote meeting of appropriate information.
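The two-state model described above can be sketched in code. The following is a minimal illustrative Python sketch, not part of the disclosure: it classifies a participant as being in the first state (active) or the second state (bystander) from recent speaking time. The 60-second window, the 10% speaking-time threshold, and all names are assumptions.

```python
from collections import deque
import time


class ParticipationEstimator:
    """Classifies a participant as ACTIVE (first state) or BYSTANDER
    (second state) from recent speech activity.

    The window length and the speaking-time ratio are illustrative
    assumptions, not values taken from the disclosure."""

    WINDOW_SEC = 60.0     # sliding window over recent utterances
    ACTIVE_RATIO = 0.10   # fraction of the window spent speaking

    def __init__(self):
        self.events = deque()  # (timestamp, speaking_duration_sec)

    def record_utterance(self, duration_sec, now=None):
        now = time.time() if now is None else now
        self.events.append((now, duration_sec))

    def state(self, now=None):
        now = time.time() if now is None else now
        # Drop utterances that fall outside the sliding window.
        while self.events and now - self.events[0][0] > self.WINDOW_SEC:
            self.events.popleft()
        spoken = sum(d for _, d in self.events)
        return "ACTIVE" if spoken / self.WINDOW_SEC >= self.ACTIVE_RATIO else "BYSTANDER"
```

A participant who spoke for 10 seconds within the last minute would be classified as active; once the utterance ages out of the window, the state falls back to bystander.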
- the notification unit changes the notification timing of the second group to the one participant depending on whether the one participant is in the first state or the second state.
- in this way, the information processing apparatus can notify participants in the remote meeting at an appropriate timing by changing the notification timing according to whether the participation state of one participant is the first state, in which the participant is actively participating, or the second state, which is any other state.
- the notification unit changes the display mode of the second group for one participant depending on whether the one participant is in the first state or the second state.
- in this way, the information processing apparatus can notify participants in the remote meeting of appropriate information by changing the display mode of the second group according to whether the participation state of one participant is the first state, in which the participant is actively participating, or the second state, which is any other state.
- the notification unit notifies the second group to the one participant when the one participant is in the second state.
- by notifying the second group when the one participant is in the second state, the information processing apparatus can make the notification while the one participant is not actively participating in the first group. Therefore, the information processing device can notify participants in the remote meeting of appropriate information.
- the notification unit notifies the second group to the one participant at the timing when the participation state of the one participant transitions from the first state to the second state.
- by notifying the second group at the timing when the one participant transitions from the first state to the second state, the information processing device can make the notification at the moment the one participant stops actively participating in the first group. Therefore, the information processing device can notify participants in the remote meeting at an appropriate timing.
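The transition-timed notification described above amounts to edge detection on the participation state. The following illustrative Python sketch (all names are assumptions, not from the disclosure) fires a recommendation only on the edge from the first state to the second state, so a participant who remains a bystander is not notified repeatedly:

```python
class TransitionNotifier:
    """Fires a recommended-group notification only at the moment a
    participant's state changes from ACTIVE (first state) to
    BYSTANDER (second state)."""

    def __init__(self, notify_fn):
        self.notify_fn = notify_fn  # callback receiving the group to recommend
        self.prev_state = None

    def update(self, state, recommended_group):
        """Returns True if a notification was fired on this update."""
        fired = False
        if self.prev_state == "ACTIVE" and state == "BYSTANDER":
            self.notify_fn(recommended_group)
            fired = True
        self.prev_state = state
        return fired
```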
- the notification unit superimposes the voices of the participants in the second group on the voices of the participants in the first group.
- the information processing device outputs the voice of the second group together with the voice of the first group in which one participant participates, thereby making it possible for the one participant to recognize both conversations. Therefore, the information processing device can notify participants in the remote meeting at an appropriate timing.
- the notification unit superimposes the voices of the participants in the second group at a second volume that is lower than the first volume of the first group.
- the information processing device outputs the sound of the second group at a volume lower than that of the first group in which the one participant participates, allowing the one participant to notice the voices of another group while concentrating on the conversation of his or her own group. Therefore, the information processing device can notify participants in the remote meeting at an appropriate timing.
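The volume relationship described above can be illustrated as a simple audio mix. The sketch below is an assumption about one possible realization, not the disclosed implementation: it superimposes the second group's samples on the first group's samples at a reduced gain, with the gain value chosen arbitrarily for illustration.

```python
def mix_groups(first_pcm, second_pcm, second_gain=0.3):
    """Superimposes the second group's audio frame on the first
    group's frame at a lower volume.

    Inputs are equal-length lists of float samples in [-1.0, 1.0];
    second_gain=0.3 is an illustrative assumption."""
    def clip(x):
        # Keep the mixed sample inside the valid PCM range.
        return max(-1.0, min(1.0, x))
    return [clip(a + second_gain * b) for a, b in zip(first_pcm, second_pcm)]
```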
- the notification unit displays the images of the participants of the second group together with the images of the participants of the first group.
- the information processing device displays the images of the participants of the second group together with the images of the participants of the first group in which one participant participates, thereby allowing the one participant to recognize the participants of both groups.
- the notification unit displays the first group and the second group so that they can be identified.
- by displaying the first group and the second group in an identifiable manner, the information processing apparatus allows the one participant to distinguish the group to which he or she belongs from the other groups.
- the notification unit displays the images of the participants of the second group in a second area that is smaller than the first area that displays the images of the participants of the first group.
- the information processing device displays the first group in which one participant participates larger than the second group, thereby allowing the one participant to focus on his or her own group while still being able to recognize the other groups.
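One possible screen layout satisfying the area relationship described above could be sketched as follows. The right-hand strip and its 25% width are illustrative assumptions, not part of the disclosure; the function simply carves the screen into one large first-group area and equal-height tiles for the other groups.

```python
def layout_groups(screen_w, screen_h, n_second_groups, strip_ratio=0.25):
    """Splits the screen into a large area for the participant's own
    (first) group and smaller tiles for the second groups.

    Returns (first_area, tiles), each rectangle as (x, y, w, h).
    strip_ratio=0.25 is an illustrative assumption."""
    strip_w = int(screen_w * strip_ratio)
    first_area = (0, 0, screen_w - strip_w, screen_h)
    tiles = []
    if n_second_groups:
        tile_h = screen_h // n_second_groups
        for i in range(n_second_groups):
            # Stack the second-group tiles vertically in the strip.
            tiles.append((screen_w - strip_w, i * tile_h, strip_w, tile_h))
    return first_area, tiles
```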
- the notification unit, when notifying the voices and images of the participants of the second group, notifies them in a manner that makes the correspondence between those voices and images recognizable. In this way, the information processing apparatus can make the one participant appropriately recognize the second group. Therefore, the information processing device can notify participants in the remote meeting at an appropriate timing.
- the notification unit notifies in a manner in which the voices of the participants in the second group are output from the direction corresponding to the relationship between the display positions of the first group and the second group.
- the information processing device outputs the voices of the participants of the second group from a direction corresponding to the relationship between the display positions of the first group and the second group, thereby allowing the one participant to properly recognize the second group. Therefore, the information processing device can notify participants in the remote meeting at an appropriate timing.
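The direction-of-output behavior described above can be illustrated with constant-power stereo panning. Mapping the horizontal display position of a group to left/right channel gains is an assumption about one possible realization, not the disclosed method:

```python
import math


def pan_gains(group_x, screen_width):
    """Constant-power stereo panning: a second group displayed toward
    the right edge of the screen is heard mostly from the right.

    Returns (left_gain, right_gain); the position-to-angle mapping is
    an illustrative assumption."""
    pan = group_x / screen_width      # 0.0 = far left, 1.0 = far right
    angle = pan * math.pi / 2.0       # map to 0..90 degrees
    # cos^2 + sin^2 = 1 keeps the perceived loudness constant.
    return math.cos(angle), math.sin(angle)
```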
- the notification unit notifies the one participant of the second group when a group keyword, which is a keyword indicating the topic of the second group, and a participant keyword, which is a keyword indicating the preference of the one participant, satisfy the similarity condition.
- by using keywords, the information processing apparatus notifies one participant (user) of groups other than the one to which he or she belongs, according to the similarity between each group's topic and that participant's preference, and can thereby notify participants in the remote meeting of appropriate information.
- the notification unit notifies the second group to the one participant when the comparison result between the group keyword of the second group and the participant keyword of the one participant satisfies the similarity condition.
- for example, the information processing device notifies one participant of a group other than the one to which he or she belongs based on the result of comparing keywords, and can thereby notify participants in the remote meeting of appropriate information.
- the notification unit notifies the one participant of the second group when the degree of similarity between the group keyword of the second group and the participant keyword of the one participant satisfies the similarity condition. For example, if the keyword similarity is equal to or greater than a predetermined threshold, the information processing device notifies one participant of a group other than the one to which he or she belongs, and can thereby notify participants in the remote meeting of appropriate information.
- FIG. 14 is a hardware configuration diagram showing an example of a computer 1000 that implements the functions of the information processing apparatus.
- the remote meeting server 100 according to the embodiment will be described below as an example.
- the computer 1000 has a CPU 1100 , a RAM 1200 , a ROM (Read Only Memory) 1300 , a HDD (Hard Disk Drive) 1400 , a communication interface 1500 and an input/output interface 1600 .
- Each part of computer 1000 is connected by bus 1050 .
- the CPU 1100 operates based on programs stored in the ROM 1300 or HDD 1400 and controls each section. For example, the CPU 1100 loads programs stored in the ROM 1300 or HDD 1400 into the RAM 1200 and executes processes corresponding to various programs.
- the ROM 1300 stores a boot program such as BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, and programs dependent on the hardware of the computer 1000.
- the HDD 1400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 1100 and data used by such programs.
- HDD 1400 is a recording medium that records an information processing program according to the present disclosure, which is an example of program data 1450 .
- a communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet).
- CPU 1100 receives data from another device via communication interface 1500, and transmits data generated by CPU 1100 to another device.
- the input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000 .
- the CPU 1100 receives data from input devices such as a keyboard and mouse via the input/output interface 1600 .
- the CPU 1100 also transmits data to an output device such as a display, speaker, or printer via the input/output interface 1600 .
- the input/output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium.
- the media include, for example, optical recording media such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
- the CPU 1100 of the computer 1000 implements the functions of the control unit 130 and the like by executing the information processing program loaded on the RAM 1200.
- the HDD 1400 also stores an information processing program according to the present disclosure and data in the storage unit 120 .
- the CPU 1100 reads and executes the program data 1450 from the HDD 1400; as another example, these programs may be obtained from another device via the external network 1550.
- (1) An information processing apparatus comprising: an estimation unit that, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, estimates the topic of the conversation in a group from the statements of the participants belonging to the group; and a notification unit that notifies one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
- (2) The information processing apparatus according to (1), wherein the estimation unit estimates the participation state of the one participant in the first group, and the notification unit changes the notification behavior regarding the second group according to the participation state of the one participant in the first group estimated by the estimation unit.
- (3) The information processing apparatus according to (2), wherein the estimation unit estimates whether the state of the one participant is a first state, which is a state of actively participating, or a second state, which is a state other than the first state, and the notification unit changes the mode of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
- (4) The information processing apparatus according to (3), wherein the notification unit changes the timing of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
- (5) The information processing apparatus according to (3) or (4), wherein the notification unit changes the display mode of the second group for the one participant according to whether the one participant is in the first state or the second state.
- (6) The information processing apparatus according to any one of (3) to (5), wherein the notification unit notifies the one participant of the second group when the one participant is in the second state.
- (7) The information processing apparatus according to any one of (3) to (6), wherein the notification unit notifies the one participant of the second group at the timing when the participation state of the one participant transitions from the first state to the second state.
- (8) The information processing apparatus according to any one of (1) to (7), wherein the notification unit superimposes the voices of the participants of the second group on the voices of the participants of the first group.
- (9) The information processing apparatus according to (8), wherein the notification unit superimposes the voices of the participants of the second group at a second volume lower than a first volume of the first group.
- (10) The information processing apparatus according to any one of (1) to (9), wherein the notification unit displays the images of the participants of the second group together with the images of the participants of the first group.
- (11) The information processing apparatus according to (10), wherein the notification unit displays the first group and the second group in an identifiable manner.
- (12) The information processing apparatus according to (10) or (11), wherein the notification unit displays the images of the participants of the second group in a second area smaller than a first area that displays the images of the participants of the first group.
- (13) The information processing apparatus according to any one of (1) to (12), wherein the notification unit, when notifying the voices and images of the participants of the second group, notifies them in a manner that makes the correspondence between the voices and images of the participants of the second group recognizable.
- (14) The information processing apparatus according to (13), wherein the notification unit notifies in a manner in which the voices of the participants of the second group are output from a direction corresponding to the relationship between the display positions of the first group and the second group.
- (15) The information processing apparatus according to any one of (1) to (14), wherein the notification unit notifies the one participant of the second group when a group keyword, which is a keyword indicating the topic of the second group, and a participant keyword, which is a keyword indicating the preference of the one participant, satisfy the condition regarding similarity.
- (16) The information processing apparatus according to (15), wherein the notification unit notifies the one participant of the second group when a comparison result between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
- (17) The information processing apparatus according to (15) or (16), wherein the notification unit notifies the one participant of the second group when the degree of similarity between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
- (18) An information processing method that determines processing of: estimating, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, the topic of the conversation in a group from the statements of the participants belonging to the group; and notifying one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
- 1 remote meeting system
- 100 remote meeting server (information processing device)
- 110 communication unit
- 120 storage unit
- 121 user information storage unit
- 122 group information storage unit
- 130 control unit
- 131 acquisition unit
- 132 data holding unit
- 133 group management unit
- 134 topic matching unit
- 135 output generation unit
- 136 notification unit
- 10 terminal device (information processing device)
- 11 communication unit
- 12 audio input unit
- 13 audio output unit
- 14 camera
- 15 display (notification unit)
- 16 operation unit
- 17 storage unit
- 18 control unit
- 181 acquisition unit
- 182 voice recognition unit
- 183 preference estimation unit
- 184 processing unit
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Data Mining & Analysis (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
1. Embodiment
1-1. Overview of information processing according to the embodiment of the present disclosure
1-1-1. Display example with recommended-group notification
1-1-2. Display example without recommended-group notification
1-1-3. Background, effects, and the like
1-1-4. User participation states
1-1-5. Groups to present
1-1-6. Notification modes
1-1-7. Group movement and the like
1-2. Configuration of the remote meeting system according to the embodiment
1-3. Configuration of the information processing device according to the embodiment
1-4. Configuration of the terminal device according to the embodiment
1-5. Information processing procedures according to the embodiment
1-5-1. Processing procedure for the information processing device
1-5-2. Processing procedure for the terminal device
1-5-3. Processing procedure for the remote meeting server
1-5-4. Matching processing procedure
1-6. Output mode example (party venue)
2. Other embodiments
2-1. Other configuration examples
2-2. Others
3. Effects according to the present disclosure
4. Hardware configuration
[1-1. Overview of information processing according to the embodiment of the present disclosure]
FIG. 1 is a diagram illustrating an example of information processing according to an embodiment of the present disclosure. The information processing according to the embodiment of the present disclosure is realized by a remote meeting system 1 that includes a remote meeting server 100 and a plurality of terminal devices 10. In FIG. 1, to distinguish the terminal devices 10, they may be described as terminal device 10a, terminal device 10b, and terminal device 10c. When the terminal devices 10a, 10b, 10c, and so on are described without particular distinction, they are simply referred to as "terminal device 10". Although FIG. 1 shows three terminal devices 10, there may be four or more.
First, a display example with a recommended-group notification will be described with reference to FIG. 2. FIG. 2 is a diagram showing an example of the display presented to a bystander. FIG. 2 shows, as an example, the display on the terminal device 10a of user U1, who is a bystander in FIG. 1.
Next, a display example without a recommended-group notification will be described with reference to FIG. 4. FIG. 4 is a diagram showing an example of the display presented to an active party. FIG. 4 shows, as an example, the display on the terminal device 10 of user U2, who is an active party in FIG. 1. In FIG. 4, points common to FIG. 2 are given the same reference signs and their description is omitted as appropriate.
In conventional remote meetings, it is difficult for multiple people to speak at the same time: while one person speaks, the other participants become listeners, and when there are many participants the conversation does not liven up. Dividing the participants into several groups is conceivable, but in that case there is the further problem that participants cannot tell what kind of conversation is taking place in the other groups. These problems are particularly pronounced in remote gatherings, such as brainstorming sessions and parties, whose agenda and progression are not decided in advance.
Examples of various kinds of information and processing will now be described. First, the participation state of a user will be described. A user participating in a remote meeting is in one of two participation states: a state of participating as an active party who actively speaks and interacts (first state), and a state of participating as a bystander who listens to others speaking (second state). Each participant attends the meeting while transitioning between these two states.
Next, an example of selecting (determining) the group to present (notify) to a user will be described. The remote meeting system 1 determines the group to recommend to each user by matching the content of the conversation in each group against the result of estimating each user's preferences.
Next, examples of notification modes will be described. First, a method of presenting (notifying) a recommended group will be described.
Next, an example of group movement and the like will be described. In the remote meeting system 1, when a recommended group is notified (presented), whether to move to that group is basically left to the user's judgment. For example, a move button may be provided next to the display of the recommended group so that the user can move to the group by pressing it.
The remote meeting system 1 shown in FIG. 5 will now be described. As shown in FIG. 5, the remote meeting system 1 includes the remote meeting server 100 and a plurality of terminal devices 10. The remote meeting server 100 and each of the terminal devices 10 are communicably connected, by wire or wirelessly, via a predetermined communication network (network N). FIG. 5 is a diagram showing a configuration example of the remote meeting system according to the embodiment. Although only three terminal devices 10 are shown in FIG. 5, the remote meeting system 1 includes at least as many terminal devices 10 as there are users participating in the remote meeting. The remote meeting system 1 shown in FIG. 5 may also include a plurality of remote meeting servers 100.
Next, the configuration of the remote meeting server 100, which is an example of an information processing device that executes the information processing according to the embodiment, will be described. FIG. 6 is a diagram showing a configuration example of the remote meeting server according to the embodiment of the present disclosure.
Next, the configuration of the terminal device 10, which is an example of an information processing device that executes the information processing according to the embodiment, will be described. FIG. 8 is a diagram showing a configuration example of the terminal device according to the embodiment of the present disclosure.
Next, procedures of various kinds of information processing according to the embodiment will be described with reference to FIGS. 9 to 12.
First, the flow of processing in the information processing device will be described with reference to FIG. 9. FIG. 9 is a flowchart showing the processing procedure of the information processing device according to the embodiment of the present disclosure. Specifically, FIG. 9 is a flowchart showing the procedure of information processing by the remote meeting server 100, which is an example of the information processing device.
Next, processing on the terminal device 10 side will be described with reference to FIG. 10. FIG. 10 is a flowchart showing the processing procedure in the terminal device according to the embodiment of the present disclosure, and shows an example of processing when the terminal device 10 provides a service related to a remote meeting.
Next, processing on the remote meeting server 100 side will be described with reference to FIG. 11. FIG. 11 is a flowchart showing the processing procedure of the remote meeting server according to the embodiment of the present disclosure, and shows an example of processing when the remote meeting server 100 provides a service related to a remote meeting.
Next, the matching processing in the remote meeting system 1 will be described with reference to FIG. 12. FIG. 12 is a flowchart showing the matching processing procedure. The matching processing shown in FIG. 12 corresponds, for example, to steps S202 to S204 in FIG. 10 and steps S301 to S303 in FIG. 11. Although the following description takes the case where the remote meeting system 1 performs the processing as an example, the processing shown in FIG. 12 may be performed by any device, such as the remote meeting server 100 or the terminal device 10, depending on the device configuration included in the remote meeting system 1.
The output modes of the remote meeting, such as the notification of recommended groups, are not limited to the examples shown in FIGS. 2 to 4, and various other output modes are possible. For example, an output mode that reflects the virtual position of each group and each user may be used. An example of such display and audio output modes will be described using the party-venue-style display shown in FIG. 13. Since FIG. 13 concerns the display and audio output modes, the description of the processing that determines the recommended group, such as the matching processing on which it is based, is omitted. In FIG. 13, description of points common to FIGS. 2 to 4 is omitted as appropriate. FIG. 13 is a diagram showing an example of a display mode.
The processing according to each embodiment described above may be implemented in various different forms (modifications) other than the above embodiments and modifications.
In the remote meeting system 1 of the example described above, the terminal device 10 performs voice recognition and preference estimation while the remote meeting server 100 performs group topic estimation and matching processing, but any mode of division of functions within the remote meeting system 1 may be adopted.
Of the processes described in each of the above embodiments, all or part of a process described as being performed automatically can also be performed manually, and all or part of a process described as being performed manually can also be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above document and drawings can be changed arbitrarily unless otherwise specified. For example, the various kinds of information shown in the drawings are not limited to the illustrated information.
As described above, the information processing device according to the present disclosure (for example, the remote meeting server 100 or the terminal device 10 in the embodiment) includes an estimation unit (the group management unit 133 in the embodiment) and a notification unit (the notification unit 136 in the embodiment). The estimation unit estimates the topic of the conversation in a group from the statements of the participants belonging to the group, in a setting where a plurality of participants in a remote meeting are divided into a plurality of groups and conversation is possible within each group. The notification unit notifies one participant of a second group, which is a group other than the first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
The information processing devices (information apparatuses) such as the remote meeting server 100 and the terminal device 10 according to each of the embodiments described above are realized by a computer 1000 configured, for example, as shown in FIG. 14. FIG. 14 is a hardware configuration diagram showing an example of the computer 1000 that implements the functions of the information processing device. The remote meeting server 100 according to the embodiment will be described below as an example. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The units of the computer 1000 are connected by a bus 1050.
(1)
An information processing apparatus comprising:
an estimation unit that, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, estimates the topic of the conversation in a group from the statements of the participants belonging to the group; and
a notification unit that notifies one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
(2)
The information processing apparatus according to (1), wherein
the estimation unit estimates the participation state of the one participant in the first group, and
the notification unit changes the notification behavior regarding the second group according to the participation state of the one participant in the first group estimated by the estimation unit.
(3)
The information processing apparatus according to (2), wherein
the estimation unit estimates whether the state of the one participant is a first state, which is a state of actively participating, or a second state, which is a state other than the first state, and
the notification unit changes the mode of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
(4)
The information processing apparatus according to (3), wherein
the notification unit changes the timing of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
(5)
The information processing apparatus according to (3) or (4), wherein
the notification unit changes the display mode of the second group for the one participant according to whether the one participant is in the first state or the second state.
(6)
The information processing apparatus according to any one of (3) to (5), wherein
the notification unit notifies the one participant of the second group when the one participant is in the second state.
(7)
The information processing apparatus according to any one of (3) to (6), wherein
the notification unit notifies the one participant of the second group at the timing when the participation state of the one participant transitions from the first state to the second state.
(8)
The information processing apparatus according to any one of (1) to (7), wherein
the notification unit superimposes the voices of the participants of the second group on the voices of the participants of the first group.
(9)
The information processing apparatus according to (8), wherein
the notification unit superimposes the voices of the participants of the second group at a second volume lower than a first volume of the first group.
(10)
The information processing apparatus according to any one of (1) to (9), wherein
the notification unit displays the images of the participants of the second group together with the images of the participants of the first group.
(11)
The information processing apparatus according to (10), wherein
the notification unit displays the first group and the second group in an identifiable manner.
(12)
The information processing apparatus according to (10) or (11), wherein
the notification unit displays the images of the participants of the second group in a second area smaller than a first area that displays the images of the participants of the first group.
(13)
The information processing apparatus according to any one of (1) to (12), wherein
the notification unit, when notifying the voices and images of the participants of the second group, notifies them in a manner that makes the correspondence between the voices and images of the participants of the second group recognizable.
(14)
The information processing apparatus according to (13), wherein
the notification unit notifies in a manner in which the voices of the participants of the second group are output from a direction corresponding to the relationship between the display positions of the first group and the second group.
(15)
The information processing apparatus according to any one of (1) to (14), wherein
the notification unit notifies the one participant of the second group when a group keyword, which is a keyword indicating the topic of the second group, and a participant keyword, which is a keyword indicating the preference of the one participant, satisfy the condition regarding similarity.
(16)
The information processing apparatus according to (15), wherein
the notification unit notifies the one participant of the second group when a comparison result between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
(17)
The information processing apparatus according to (15) or (16), wherein
the notification unit notifies the one participant of the second group when the degree of similarity between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
(18)
An information processing method that determines processing of:
estimating, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, the topic of the conversation in a group from the statements of the participants belonging to the group; and
notifying one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
100 remote meeting server (information processing device)
110 communication unit
120 storage unit
121 user information storage unit
122 group information storage unit
130 control unit
131 acquisition unit
132 data holding unit
133 group management unit
134 topic matching unit
135 output generation unit
136 notification unit
10 terminal device (information processing device)
11 communication unit
12 audio input unit
13 audio output unit
14 camera
15 display (notification unit)
16 operation unit
17 storage unit
18 control unit
181 acquisition unit
182 voice recognition unit
183 preference estimation unit
184 processing unit
Claims (18)
- An information processing apparatus comprising: an estimation unit that, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, estimates the topic of the conversation in a group from the statements of the participants belonging to the group; and a notification unit that notifies one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
- The information processing apparatus according to claim 1, wherein the estimation unit estimates the participation state of the one participant in the first group, and the notification unit changes the notification behavior regarding the second group according to the participation state of the one participant in the first group estimated by the estimation unit.
- The information processing apparatus according to claim 2, wherein the estimation unit estimates whether the state of the one participant is a first state, which is a state of actively participating, or a second state, which is a state other than the first state, and the notification unit changes the mode of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
- The information processing apparatus according to claim 3, wherein the notification unit changes the timing of notification of the second group to the one participant according to whether the one participant is in the first state or the second state.
- The information processing apparatus according to claim 3, wherein the notification unit changes the display mode of the second group for the one participant according to whether the one participant is in the first state or the second state.
- The information processing apparatus according to claim 3, wherein the notification unit notifies the one participant of the second group when the one participant is in the second state.
- The information processing apparatus according to claim 3, wherein the notification unit notifies the one participant of the second group at the timing when the participation state of the one participant transitions from the first state to the second state.
- The information processing apparatus according to claim 1, wherein the notification unit superimposes the voices of the participants of the second group on the voices of the participants of the first group.
- The information processing apparatus according to claim 8, wherein the notification unit superimposes the voices of the participants of the second group at a second volume lower than a first volume of the first group.
- The information processing apparatus according to claim 1, wherein the notification unit displays the images of the participants of the second group together with the images of the participants of the first group.
- The information processing apparatus according to claim 10, wherein the notification unit displays the first group and the second group in an identifiable manner.
- The information processing apparatus according to claim 10, wherein the notification unit displays the images of the participants of the second group in a second area smaller than a first area that displays the images of the participants of the first group.
- The information processing apparatus according to claim 1, wherein the notification unit, when notifying the voices and images of the participants of the second group, notifies them in a manner that makes the correspondence between the voices and images of the participants of the second group recognizable.
- The information processing apparatus according to claim 13, wherein the notification unit notifies in a manner in which the voices of the participants of the second group are output from a direction corresponding to the relationship between the display positions of the first group and the second group.
- The information processing apparatus according to claim 1, wherein the notification unit notifies the one participant of the second group when a group keyword, which is a keyword indicating the topic of the second group, and a participant keyword, which is a keyword indicating the preference of the one participant, satisfy the condition regarding similarity.
- The information processing apparatus according to claim 15, wherein the notification unit notifies the one participant of the second group when a comparison result between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
- The information processing apparatus according to claim 15, wherein the notification unit notifies the one participant of the second group when the degree of similarity between the group keyword of the second group and the participant keyword of the one participant satisfies the condition regarding similarity.
- An information processing method that determines processing of: estimating, in a setting where a plurality of participants participating in a remote meeting are divided into a plurality of groups and conversation is possible within each group, the topic of the conversation in a group from the statements of the participants belonging to the group; and notifying one participant of a second group, which is a group other than a first group to which the one participant belongs, when the topic of the second group and the preference of the one participant satisfy a condition regarding similarity.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22784346.3A EP4322090A1 (en) | 2021-04-06 | 2022-02-16 | Information processing device and information processing method |
JP2023512848A JPWO2022215361A1 (ja) | 2021-04-06 | 2022-02-16 | |
US18/551,621 US20240171418A1 (en) | 2021-04-06 | 2022-02-16 | Information processing device and information processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-064955 | 2021-04-06 | ||
JP2021064955 | 2021-04-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022215361A1 true WO2022215361A1 (ja) | 2022-10-13 |
Family
ID=83546327
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/006165 WO2022215361A1 (ja) | 2021-04-06 | 2022-02-16 | 情報処理装置及び情報処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240171418A1 (ja) |
EP (1) | EP4322090A1 (ja) |
JP (1) | JPWO2022215361A1 (ja) |
WO (1) | WO2022215361A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024084843A1 (ja) * | 2022-10-19 | 2024-04-25 | 株式会社Nttドコモ | 仮想空間管理装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014137706A (ja) * | 2013-01-16 | 2014-07-28 | Dainippon Printing Co Ltd | サーバ装置、プログラム及び通信システム |
US20180322122A1 (en) * | 2017-05-03 | 2018-11-08 | Facebook, Inc. | Recommendations for online system groups |
US10469275B1 (en) * | 2016-06-28 | 2019-11-05 | Amazon Technologies, Inc. | Clustering of discussion group participants |
-
2022
- 2022-02-16 EP EP22784346.3A patent/EP4322090A1/en active Pending
- 2022-02-16 JP JP2023512848A patent/JPWO2022215361A1/ja active Pending
- 2022-02-16 WO PCT/JP2022/006165 patent/WO2022215361A1/ja active Application Filing
- 2022-02-16 US US18/551,621 patent/US20240171418A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014137706A (ja) * | 2013-01-16 | 2014-07-28 | Dainippon Printing Co Ltd | サーバ装置、プログラム及び通信システム |
US10469275B1 (en) * | 2016-06-28 | 2019-11-05 | Amazon Technologies, Inc. | Clustering of discussion group participants |
US20180322122A1 (en) * | 2017-05-03 | 2018-11-08 | Facebook, Inc. | Recommendations for online system groups |
Non-Patent Citations (2)
Title |
---|
ROCKY6959: "Participants can now freely move around breakout rooms", ZOOM-LES.CMC.OSAKA-U.AC.JP, ZOOM-LES.CMC.OSAKA-U.AC.JP, JP, 22 September 2020 (2020-09-22), JP, pages 1 - 1, XP055976665, Retrieved from the Internet <URL:https://zoom.les.cmc.osaka-u.ac.jp/2020/09/22/%E5%8F%82%E5%8A%A0%E8%80%85%E3%81%8C%E3%83%96%E3%83%AC%E3%82%A4%E3%82%AF%E3%82%A2%E3%82%A6%E3%83%88%E3%83%AB%E3%83%BC%E3%83%A0%E3%82%92%E8%87%AA%E7%94%B1%E3%81%AB%E7%A7%BB%E5%8B%95%E3%81%A7%E3%81%8D%EF%BC%9E> [retrieved on 20221101] * |
USE BREAKOUT ROOMS IN TEAMS MEETINGS, 1 April 2021 (2021-04-01), Retrieved from the Internet <URL:https://web.archive.org/web/20210122112205/https://support.microsoft.com/en-us/office/use-breakout-rooms-in-teams-meetings-7delf48a-da07-466c-a5ab-4ebace28e461> |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024084843A1 (ja) * | 2022-10-19 | 2024-04-25 | 株式会社Nttドコモ | 仮想空間管理装置 |
Also Published As
Publication number | Publication date |
---|---|
EP4322090A1 (en) | 2024-02-14 |
JPWO2022215361A1 (ja) | 2022-10-13 |
US20240171418A1 (en) | 2024-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12028302B2 (en) | Assistance during audio and video calls | |
KR102089487B1 (ko) | 디지털 어시스턴트 서비스의 원거리 확장 | |
US11575531B2 (en) | Dynamic virtual environment | |
CN110730952B (zh) | 处理网络上的音频通信的方法和*** | |
US8791977B2 (en) | Method and system for presenting metadata during a videoconference | |
KR20200039030A (ko) | 디지털 어시스턴트 서비스의 원거리 확장 | |
US20130211826A1 (en) | Audio Signals as Buffered Streams of Audio Signals and Metadata | |
WO2020123177A1 (en) | Dynamic curation of sequence events for communication sessions | |
US20220224735A1 (en) | Information processing apparatus, non-transitory computer readable medium storing program, and method | |
JP2015517709A (ja) | コンテキストに基づくメディアを適応配信するシステム | |
JP2002522998A (ja) | インターネットおよびイントラネットを含むローカルおよびグローバルネットワークによるオーディオ会議用のコンピューター・アーキテクチャーおよびプロセス | |
KR20220123576A (ko) | 3차원(3d) 환경에 대한 통합된 입/출력 | |
WO2022215361A1 (ja) | 情報処理装置及び情報処理方法 | |
CN112528052A (zh) | 多媒体内容输出方法、装置、电子设备和存储介质 | |
JP7152453B2 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及び情報処理システム | |
WO2019026395A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2023051211A (ja) | 情報処理装置、情報処理方法および情報処理プログラム | |
JP7344612B1 (ja) | プログラム、会話要約装置、および会話要約方法 | |
WO2024032111A1 (zh) | 在线会议的数据处理方法、装置、设备、介质及产品 | |
US11935557B2 (en) | Techniques for detecting and processing domain-specific terminology | |
KR102509106B1 (ko) | 발화 영상 제공 방법 및 이를 수행하기 위한 컴퓨팅 장치 | |
WO2022224584A1 (ja) | 情報処理装置、情報処理方法、端末装置及び表示方法 | |
US20240046951A1 (en) | Speech image providing method and computing device for performing the same | |
JP7293863B2 (ja) | 音声処理装置、音声処理方法およびプログラム | |
KR100596001B1 (ko) | 사용자 단말기에 소정의 콘텐츠를 제공하기 위한 방법 및그 시스템 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22784346 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023512848 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18551621 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022784346 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022784346 Country of ref document: EP Effective date: 20231106 |