WO2016067765A1 - Information processing apparatus, information processing method, and computer program
- Publication number
- WO2016067765A1 (application PCT/JP2015/075629)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- information processing
- processing apparatus
- information
- context information
- Prior art date
Classifications
- H04W4/029—Location-based management or tracking services
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06V40/20—Movements or behaviour, e.g. gesture recognition, in image or video data
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- H04L41/0816—Configuration setting where the condition triggering a change of settings is an adaptation, e.g. in response to network events
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/148—Migration or transfer of sessions
- H04L67/306—User profiles
- H04L67/54—Presence management, e.g. monitoring or registration for receipt of user log-on information, or the connection status of the users
- H04W4/02—Services making use of location information
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a computer program.
- Patent Document 1 proposes a technology that provides support over a plurality of networks that use different radio access technologies.
- According to an aspect of the present disclosure, there is provided an information processing apparatus including a control unit that, upon detection of the user's departure from the information processing apparatus toward another apparatus, controls transmission of at least a part of the context information that has been exchanged with the user to the other apparatus.
- According to another aspect, there is provided an information processing apparatus including a control unit that, upon detection of the user's proximity to the information processing apparatus, controls reception, from another apparatus, of at least a part of the context information that the other apparatus has exchanged with the user.
- According to another aspect, there is provided an information processing method including detecting the user's departure from the information processing apparatus and controlling transmission of at least a part of the context information that has been exchanged with the user to another apparatus.
- According to another aspect, there is provided a computer program causing a computer to detect the user's departure from the information processing apparatus and to control transmission of at least a part of the context information that has been exchanged with the user to another apparatus.
- According to another aspect, there is provided a computer program causing a computer to detect the user's proximity to the information processing device and to control reception, from another device, of at least a part of the context information that has been exchanged with the user.
- FIG. 2 is an explanatory diagram showing an outline of the operations of the information processing apparatus 100 and the information processing apparatus 200. FIGS. 3 to 7 are flowcharts showing operation examples. A further figure is a block diagram showing a hardware configuration example.
- In recent years, devices that can connect to the Internet and exchange information have increased; such devices may be collectively referred to as "IoT (Internet of Things) devices".
- A user who owns a plurality of IoT devices can send instructions to them while switching between the devices being operated, and can view and listen to content acquired by an IoT device.
- The input and output information exchanged between the user and an IoT device in this way is hereinafter referred to as "context information".
- For example, suppose a certain IoT device has a voice recognition function: the user inputs a command by speaking to the IoT device, and the IoT device executes a predetermined process based on the input command and outputs the result. As long as the user keeps using the same IoT device, when the user utters a demonstrative such as "it" or "the one from before", the IoT device can analyze the utterance and, based on the context information accumulated with the user so far, determine what the demonstrative specifically refers to and execute the appropriate process.
- However, when the user switches to another IoT device, that device can recognize through voice recognition that the user said "it" or "the one from before", but it cannot determine what the demonstrative specifically refers to, because it does not hold the context information the user built up with the previous IoT device, and therefore cannot execute the appropriate process.
- If the context information can be handed over between IoT devices, the above problem is solved.
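The demonstrative problem above can be illustrated with a minimal sketch, not taken from the patent itself: a device resolves "it" against the interaction history handed over as context information. All function and field names here are hypothetical.

```python
# Hedged sketch: resolving a demonstrative such as "it" using handed-over
# context information. Names and data shapes are illustrative assumptions.

DEMONSTRATIVES = ("it", "that", "the one from before")

def resolve_demonstrative(utterance, context):
    """Return the most recent entity from the context when the utterance
    contains a demonstrative; None when nothing can be resolved."""
    text = utterance.lower()
    if not any(d in text for d in DEMONSTRATIVES):
        return None
    # Walk the interaction history backwards and reuse the newest entity.
    for entry in reversed(context):
        if "entity" in entry:
            return entry["entity"]
    return None

context = [
    {"utterance": "play jazz radio", "entity": "jazz radio"},
    {"utterance": "turn the volume up"},
]
assert resolve_demonstrative("stop it", context) == "jazz radio"
# Without the handed-over context, the same utterance cannot be resolved.
assert resolve_demonstrative("stop it", []) is None
```

This is exactly the failure mode described above: the second device can transcribe "stop it" but, without the first device's context list, has nothing to bind "it" to.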
- In view of the above, the present inventors have intensively studied a technology that enables context information to be efficiently linked across a plurality of IoT devices.
- As a result, the present inventors have devised a technique that detects the user's behavior and, based on that behavior, efficiently hands over context information between a plurality of IoT devices.
- FIG. 1 is an explanatory diagram illustrating an overall configuration example of an information processing system 1 according to an embodiment of the present disclosure.
- FIG. 1 shows an example of the overall configuration of the information processing system 1, in which context information is handed over between the information processing apparatuses 100 and 200, which are IoT devices, so that one apparatus can take over the processing the user has been performing with the other.
- the information processing system 1 includes information processing apparatuses 100 and 200, a sensor 300, and input / output devices 400a and 400b.
- the information processing apparatuses 100 and 200 receive an operation input from the user and execute various information processes according to the operation input.
- As shown in FIG. 1, the information processing apparatus 100 in the information processing system 1 includes a control unit 110, a detection unit 120, and a context information management unit 130.
- the control unit 110 is a block that controls the operation of the information processing apparatus 100, and may be configured by a CPU, a ROM, a RAM, and the like, for example.
- Based on notifications from the detection unit 120, the control unit 110 sends instructions to the context information management unit 130 and executes processing such as selecting a content output destination from the input/output devices 400a and 400b.
- the processing executed by the control unit 110 is not limited to this example.
- the detecting unit 120 detects the state of the user who uses the information processing apparatus 100 that is an IoT device. As shown in FIG. 1, the detection unit 120 includes a recognition unit 122.
- The recognition unit 122 acquires sensing data from the sensor 300, which senses the situation of the user using the information processing apparatus 100, and detects the state of that user based on the acquired sensing data.
- the sensor 300 will be described later.
- The sensors constituting the sensor 300 may include, for example, a positioning sensor that acquires the current position using GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), BDS (BeiDou Navigation Satellite System), or the like.
- The sensor 300 may also include a camera with depth information, a human presence sensor, a microphone, and the like.
- The sensor 300 may detect, for example, the acceleration, angular velocity, azimuth, illuminance, temperature, and atmospheric pressure applied to the apparatus.
- the various sensors described above can detect various information as information related to the user, for example, information indicating the user's movement and orientation.
- the sensor 300 may include a sensor that detects biological information of the user such as a pulse, sweating, brain waves, touch, smell, and taste.
- the recognition unit 122 detects the state of the user who uses the information processing apparatus 100 based on the sensing data acquired from the sensor 300, and performs a predetermined notification to the control unit 110 based on the detection result.
- The notification from the recognition unit 122 to the control unit 110 is, for example, a notification that the user has moved and may be about to use the information processing apparatus 100.
- the control unit 110 instructs the context information management unit 130 to acquire context information.
- the context information management unit 130 manages context information.
- The context information management unit 130 holds context information, that is, the content of the user's interaction with the IoT device.
- the context information management unit 130 executes a process of acquiring context information from another IoT device based on an instruction from the control unit 110.
- The interaction content with the IoT device includes voice, text, images, biometric data, and other inputs from the user to the IoT device.
- The information included in the context information may include, for example, information identifying the operating user, information identifying the application the user is using or the content the user is browsing, the execution state of the application, and information specifying the browsing position.
- the information included in the context information is not limited to this example.
- the context information management unit 130 stores the context information in association with information that uniquely identifies the user (for example, information such as a user ID).
- Because the context information management unit 130 stores context information in association with information that uniquely identifies the user, it becomes possible to perform processing such as identifying the holder of the context information and acquiring the context information corresponding to that user.
- Table 1 is an example of context information managed by the context information management unit 130.
- Table 1 shows an example of context information when context information is managed in units of user IDs.
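Table 1 itself is not reproduced in this extraction; the following is a hedged sketch of what managing context information in units of user IDs might look like. The class name and the record fields (application, content, browsing position) are illustrative assumptions based on the kinds of information the text lists.

```python
# Hedged sketch of context information keyed by user ID, as Table 1
# describes. All names are hypothetical, not taken from the patent.

class ContextInformationManager:
    def __init__(self):
        self._store = {}  # user_id -> context record

    def update(self, user_id, application, content, position):
        self._store[user_id] = {
            "application": application,  # application the user is using
            "content": content,          # content the user is browsing
            "position": position,        # browsing/playback position
        }

    def get(self, user_id):
        # Returning the record for a user ID is what lets another device
        # take over the dialogue for that specific user.
        return self._store.get(user_id)

manager = ContextInformationManager()
manager.update("user-001", application="music player",
               content="album A", position=42)
record = manager.get("user-001")
assert record["content"] == "album A"
assert manager.get("user-999") is None  # unknown user: nothing to hand over
```

Keying the store by user ID mirrors the point made above: the holder of the context can be identified, and only the matching user's record needs to be transferred.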
- the information processing apparatus 200 may have a configuration similar to that of the information processing apparatus 100.
- the information processing apparatus 100 determines the status of the user who uses the information processing apparatus 100 using the sensing data obtained by sensing by the sensor 300.
- the user status determined by the information processing apparatus 100 using the sensing data obtained by sensing by the sensor 300 may include, for example, the user's moving direction, moving speed, and the like.
- The sensor 300 is provided so that, when the user approaches the information processing apparatus 100, it can be detected that the user is likely to use the apparatus next, and, when the user moves away, it can be detected that the likelihood of the user not using the apparatus has increased.
- It is desirable that the sensor 300 be installed at a position and in an orientation from which the proximity or departure of the user with respect to the corresponding information processing apparatus 100 can be determined.
- the sensor 300 is preferably provided in the vicinity of the information processing apparatus 100, but may be provided on the movement path of the user to the information processing apparatus 100.
- The sensor 300 can be provided not only to detect the proximity of a person but also to recognize who is approaching. For this purpose, as described above, the sensor 300 preferably includes a device that can acquire information for identifying a person, such as a camera or a microphone. For example, if information such as a person's figure, voice, or vibration is obtained by the sensor 300, the information processing apparatus 100 can identify the approaching person from the sensing data. Of course, if only one user uses the IoT device, the sensor 300 may have only a function of detecting the proximity of a person.
- By acquiring sensing data from the sensor 300, the information processing apparatus 100 can detect the proximity of the user, acquire context information from another IoT device (for example, the information processing apparatus 200), and continue the dialogue with the user.
- FIG. 1 shows two IoT devices, the information processing apparatuses 100 and 200. For example, when the user has been interacting with the information processing apparatus 200 and the sensor 300 detects that the user is heading toward the information processing apparatus 100, the information processing apparatus 100 acquires the context information from the information processing apparatus 200.
- Here, a case where the area covered by a certain IoT device is in contact with, or partially overlaps, the area covered by another IoT device is expressed as "adjacent", and a case where the areas are not in contact is expressed as "proximate".
- For example, when the user is in a place where the areas covered by a plurality of IoT devices overlap, it is necessary to consider which IoT device should pick up the audio and which IoT device should be responsible for output.
- When the covered areas overlap, which IoT device executes the processing may be determined by, for example, image recognition using an image captured by a camera or voice recognition using sound collected by a microphone. For example, the IoT device that determines, as a result of image recognition or voice recognition, that the user is facing it may execute the processing.
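The arbitration just described can be sketched as follows. This is not from the patent: here a per-device recognition confidence that the user is "facing the front" stands in for the image or voice recognition result, and all device names and score values are hypothetical.

```python
# Hedged sketch: choosing which IoT device handles a request when covered
# areas overlap. Each device reports a recognition confidence that the
# user is facing it; the highest-scoring device executes the processing.

def select_responsible_device(facing_scores):
    """facing_scores: dict mapping device id -> confidence that the user
    is facing that device. Returns the device that should respond, or
    None when no device produced a score."""
    if not facing_scores:
        return None
    return max(facing_scores, key=facing_scores.get)

scores = {"device-100": 0.91, "device-200": 0.34}
assert select_responsible_device(scores) == "device-100"
assert select_responsible_device({}) is None
```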
- FIG. 2 is an explanatory diagram showing an outline of the operations of the information processing apparatus 100 and the information processing apparatus 200. For example, suppose the user has been interacting with the information processing apparatus 200 and then moves toward the place where the information processing apparatus 100 is located. When the sensor 300 detects that the user has approached, it notifies the information processing apparatus 100 of the user's proximity.
- When the information processing apparatus 100 receives the notification from the sensor 300, it determines that the user is approaching and acquires context information from the information processing apparatus 200. By acquiring the context information, the information processing apparatus 100 can take over the dialogue the user has been carrying out with the information processing apparatus 200.
- In the example illustrated in FIG. 2, the information processing apparatus 200, from which the user departs, is hereinafter also referred to as the "departed IoT device", and the information processing apparatus 100, which the user approaches, is also referred to as the "approached IoT device".
- the information processing apparatus 100 and the information processing apparatus 200 are in a range where direct communication is possible, the information processing apparatus 100 can directly acquire the context information from the information processing apparatus 200.
- the information processing apparatus 100 and the information processing apparatus 200 are not in a range where direct communication is possible, the information processing apparatus 100 acquires context information from, for example, the cloud server 10. Therefore, each of the information processing apparatuses 100 and 200 has a function of transmitting context information to the cloud server 10 and receiving it from the cloud server 10.
- All of the held context information may be transmitted from the departed IoT device to the approached IoT device.
- However, when the amount of context information is large, transmission may take time, and the transfer may not be completed before the dialogue process starts at the approached IoT device. Therefore, only a part of the held context information may be transmitted from the departed IoT device to the approached IoT device.
- The context information transmitted by the departed IoT device may be, for example, only the most recently generated context information, or only the context information judged to be most relevant based on past context information and the user's current situation.
- The user's current situation includes, for example, the time, the place, and the people the user is with.
- The user's current situation is determined, for example, by the departed IoT device from the sensing data obtained by the sensor 300.
- Whether the information processing apparatus 100 and the information processing apparatus 200 are within direct communication range may be determined by, for example, measuring the strength of the radio waves that the information processing apparatus 200 uses for wireless communication. That is, when the information processing apparatus 100 detects no radio waves from the information processing apparatus 200 at all, or detects them but their strength is below a predetermined value, it may judge that the information processing apparatus 200 is not within direct communication range.
- As the radio waves used by the information processing apparatus 200 for wireless communication, for example, radio waves in the frequency bands used by Wi-Fi, Bluetooth (registered trademark), or the like can be used, but the radio waves are not limited to a specific band or type.
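The direct-communication check above can be sketched as a simple threshold test. This is a hedged illustration, not the patent's implementation: the threshold value stands in for the "predetermined value", and the fallback route is the cloud server 10 described above.

```python
# Hedged sketch: decide between direct transfer and the cloud route based
# on measured signal strength. The threshold is a hypothetical value.

RSSI_THRESHOLD_DBM = -70  # stand-in for the "predetermined value"

def can_communicate_directly(peer_rssi_dbm):
    """peer_rssi_dbm: measured signal strength in dBm, or None when no
    radio waves from the peer device are detected at all."""
    if peer_rssi_dbm is None:
        return False
    return peer_rssi_dbm >= RSSI_THRESHOLD_DBM

def choose_transfer_route(peer_rssi_dbm):
    return "direct" if can_communicate_directly(peer_rssi_dbm) else "cloud"

assert choose_transfer_route(-50) == "direct"  # strong signal: fetch directly
assert choose_transfer_route(-85) == "cloud"   # weak signal: go via the cloud
assert choose_transfer_route(None) == "cloud"  # no signal detected at all
```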
- As described above, the information processing apparatus 100 acquires the context information managed by the information processing apparatus 200 either directly from the information processing apparatus 200 or via the cloud server 10, and can thereby take over the dialogue the user has been carrying out with the information processing apparatus 200.
- FIG. 1 illustrates an example in which the input/output devices 400a and 400b, which output content and receive input from the user, are provided in the information processing system 1.
- the present disclosure is not limited to such an example, and the information processing apparatus 100 may be configured to output content or accept input from a user.
- When voice or image input from the user is obtained by a plurality of input/output devices 400a and 400b, the information processing apparatus 100 adopts the input with the best quality. When there are a plurality of candidate output destinations among the input/output devices 400a and 400b, the information processing apparatus 100 may, for example, designate all of them as output destinations, or may ask the user to specify the output destination.
- FIG. 3 is a flowchart illustrating an operation example of the departed IoT device (for example, the information processing apparatus 200 in FIG. 2) in the information processing system 1 according to an embodiment of the present disclosure.
- The departed IoT device determines whether there is a request for context information from another IoT device (step S101). If there is a request (step S101, Yes), the departed IoT device transmits the context information to the IoT device that sent the request (step S102).
- If there is no request (step S101, No), the departed IoT device waits until a request for context information arrives from another IoT device.
- The departed IoT device may transmit the context information to the cloud server 10 instead of transmitting it directly to the other IoT device.
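The FIG. 3 flow can be sketched as a request-serving loop. This is a hedged illustration with the transport abstracted away; device ids and the payload shape are hypothetical.

```python
# Hedged sketch of the departed IoT device's loop in FIG. 3: answer each
# context information request (step S101, Yes) with a transmission
# (step S102); an empty request list means the device keeps waiting.

def handle_request_queue(requests, context_info):
    """requests: iterable of requesting device ids. Returns a list of
    (destination, payload) transmissions the device would perform."""
    transmissions = []
    for requester in requests:
        # Step S102: transmit the context information to the requester.
        # A real device could equally upload it to the cloud server 10.
        transmissions.append((requester, context_info))
    return transmissions

context_info = {"user_id": "user-001", "dialogue": ["play jazz radio"]}
sent = handle_request_queue(["device-100"], context_info)
assert sent == [("device-100", context_info)]
assert handle_request_queue([], context_info) == []  # step S101, No: wait
```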
- FIG. 4 is a flowchart illustrating an operation example of the approached IoT device (for example, the information processing apparatus 100 in FIG. 2) in the information processing system 1 according to an embodiment of the present disclosure.
- An operation example of the approached IoT device in the information processing system 1 will be described with reference to FIG. 4.
- The approached IoT device acquires sensing data from the sensor 300 (step S111) and determines, based on the sensing data, whether the user is approaching (step S112).
- The method of determining whether the user is approaching is not limited to a specific one. For example, if the acquired sensing data includes images captured by a camera and a person's face grows larger over time in those images, it can be determined that the person is approaching the camera. Conversely, if the back of a person's head grows smaller over time, it can be determined that the person is moving away from the camera.
- Furthermore, by combining this with recognition of the user to be detected, it is possible to determine not only that a face is growing larger over time but also whether the detected face belongs to the target user, that is, whether the target user is approaching.
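The face-size heuristic just described can be sketched as follows. This is a hedged illustration, not the patent's algorithm: the bounding-box areas and the growth factor are hypothetical stand-ins for the camera-based judgment.

```python
# Hedged sketch of the approach check: if the detected face area grows
# over successive camera frames, the person is judged to be approaching.

def is_approaching(face_areas, min_growth=1.2):
    """face_areas: face bounding-box areas (in pixels) over time for one
    person. Judge 'approaching' when the newest area exceeds the oldest
    by the hypothetical growth factor min_growth."""
    if len(face_areas) < 2:
        return False
    return face_areas[-1] >= face_areas[0] * min_growth

assert is_approaching([900, 1100, 1400]) is True   # face grows: approaching
assert is_approaching([1400, 1100, 900]) is False  # face shrinks: moving away
assert is_approaching([1000]) is False             # too few frames to judge
```

As noted above, a real device would additionally match the detected face against the target user before concluding that this particular user is approaching.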
- If it is determined in step S112 that the user is not approaching (step S112, No), the approached IoT device returns to the sensing data acquisition process in step S111.
- If it is determined in step S112 that the user is approaching (step S112, Yes), the approached IoT device then discovers the holder of the context information (step S113), acquires the context information from that holder, and interprets it (step S114).
- The approached IoT device may receive the context information from the cloud server 10 instead of receiving it directly from the departed IoT device.
- FIG. 5 is a flowchart illustrating an operation example of the approached IoT device (for example, the information processing apparatus 100 in FIG. 2) in the information processing system 1 according to an embodiment of the present disclosure.
- FIG. 5 shows the operation when the user subsequently leaves that device.
- This operation example will be described with reference to FIG. 5.
- The IoT device acquires sensing data from the sensor 300 (step S121) and determines, based on the sensing data, whether the user has left the device (step S122).
- If it is determined in step S122 that the user has not left (step S122, No), the device returns to the sensing data acquisition process in step S121.
- If it is determined in step S122 that the user has left (step S122, Yes), the device then saves the context information in the cloud server 10 (step S123).
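The FIG. 5 flow can be sketched as a small polling loop. This is a hedged illustration: the boolean sensor readings and the dict standing in for the cloud server 10 are assumptions, not the patent's interfaces.

```python
# Hedged sketch of FIG. 5: poll the sensor (step S121), and once the
# user's departure is detected (step S122, Yes) save the context
# information to the cloud (step S123).

def run_departure_loop(sensor_readings, context_info, cloud_store):
    """sensor_readings: sequence of booleans, True meaning 'user has
    left'. Returns True once the context has been saved to the cloud."""
    for user_has_left in sensor_readings:           # step S121: acquire data
        if user_has_left:                           # step S122: departure?
            cloud_store["user-001"] = context_info  # step S123: save to cloud
            return True
    return False  # user never left during the observed readings

cloud = {}
saved = run_departure_loop([False, False, True],
                           {"dialogue": ["play jazz"]}, cloud)
assert saved is True
assert cloud["user-001"] == {"dialogue": ["play jazz"]}
```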
- In this way, each device constituting the information processing system 1 exchanges context information with other devices, or stores it in the cloud server 10, in response to the user's approach or departure.
- FIG. 6 is a flowchart illustrating an operation example of the approached IoT device constituting the information processing system 1 according to an embodiment of the present disclosure.
- FIG. 6 shows an operation example of the approached IoT device when context information is exchanged via the cloud server 10.
- The approached IoT device acquires sensing data from the sensor 300 (step S131) and determines, based on the sensing data, whether the user is approaching (step S132).
- If it is determined in step S132 that the user is not approaching (step S132, No), the approached IoT device returns to the sensing data acquisition process in step S131.
- If it is determined in step S132 that the user is approaching (step S132, Yes), the approached IoT device then discovers the holder of the context information (step S133), acquires the context information from that holder, and interprets it (step S134). In this case, the approached IoT device receives the context information from the cloud server 10.
- The approached IoT device then processes the user's requests using the acquired context information (step S135).
- The approached IoT device acquires sensing data from the sensor 300 (step S136) and determines, based on the sensing data, whether the user has left the device (step S137).
- If it is determined in step S137 that the user has not left (step S137, No), the approached IoT device returns to the user request processing in step S135.
- If it is determined in step S137 that the user has left (step S137, Yes), the approached IoT device then stores the context information in the cloud server 10 (step S138).
- The approached IoT device may be in a sleep state before executing the series of processes shown in FIG. 6.
- When the approached IoT device detects the proximity of the user, it leaves the sleep state and shifts to its normal operating mode.
- By executing the series of operations shown in FIG. 6, the approached IoT device can exchange context information with the departed IoT device via the cloud server 10. It can thus acquire, via the cloud server 10, the context information held by the departed IoT device, take over the dialogue processing the user has performed so far, and continue that dialogue.
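The full FIG. 6 cycle, from cloud fetch to cloud save, can be sketched end to end. This is a hedged illustration: the cloud server 10 is modelled as a plain dict and the request processing simply extends the dialogue list; all names are hypothetical.

```python
# Hedged sketch of the FIG. 6 flow: fetch context from the cloud on the
# user's approach (steps S133-S134), serve requests while the user is
# present (step S135), and store the context back on departure (step S138).

def cloud_handover_cycle(user_id, cloud_store, requests):
    # Steps S133-S134: discover and acquire the context from the cloud.
    context = cloud_store.get(user_id, {"dialogue": []})
    # Step S135: process the user's requests, extending the dialogue.
    for request in requests:
        context["dialogue"].append(request)
    # Step S138: the user left, so store the updated context in the cloud.
    cloud_store[user_id] = context
    return context

cloud = {"user-001": {"dialogue": ["play jazz radio"]}}
cloud_handover_cycle("user-001", cloud, ["turn it up"])
# The dialogue taken over from the previous device continues seamlessly.
assert cloud["user-001"]["dialogue"] == ["play jazz radio", "turn it up"]
```

Because the updated context is written back to the cloud, the next device the user approaches can repeat the same cycle and pick up where this one left off.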
- FIG. 7 is a flowchart illustrating an operation example of the proximity IoT device configuring the information processing system 1 according to an embodiment of the present disclosure.
- FIG. 7 shows an operation example of the proximity IoT device when context information is directly exchanged without using the cloud server 10.
- the proximity IoT device acquires sensing data from the sensor 300 (step S141), and determines whether the user is approaching based on the sensing data acquired from the sensor 300 (step S142).
- If it is determined in step S142 that the user is not approaching (step S142, No), the proximity IoT device returns to the sensing data acquisition process in step S141.
- If it is determined in step S142 that the user is approaching (step S142, Yes), the proximity IoT device discovers the holder of the context information (step S143), acquires the context information from that holder, and interprets it (step S144).
- The proximity IoT device then processes the user's request using the acquired context information (step S145).
- The proximity IoT device acquires sensing data from the sensor 300 (step S146) and determines, based on that sensing data, whether the user has left the proximity IoT device (step S147).
- If it is determined in step S147 that the user has not left (step S147, No), the proximity IoT device returns to the user request processing in step S145.
- If it is determined that the user has left (step S147, Yes), the proximity IoT device determines whether a request for the context information has been received from another IoT device (step S148).
- If it is determined in step S148 that there is a request for the context information from another IoT device (step S148, Yes), the proximity IoT device transmits the context information to the IoT device that sent the request (step S149).
- If there is no request (step S148, No), the proximity IoT device acquires sensing data from the sensor 300 (step S150) and determines, based on that sensing data, whether the user who once left is approaching again (step S151).
- If it is determined in step S151 that the user is not approaching again (step S151, No), the proximity IoT device returns to the determination of whether there is a request for the context information in step S148. On the other hand, if it is determined in step S151 that the user has approached again (step S151, Yes), the proximity IoT device returns to the user request processing in step S145 and executes the subsequent processes.
- The proximity IoT device may shift from the normal-operation mode to a sleep state when it detects the user's departure.
- By executing the series of operations shown in FIG. 7, the proximity IoT device can exchange context information with other IoT devices without going through the cloud server 10. That is, the proximity IoT device directly acquires the context information held by the IoT device the user had been using, and can continue the dialogue processing by taking over the dialogue processing that had been performed.
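The post-departure loop of FIG. 7 (steps S148 to S151) can be sketched as follows: after the user leaves, the device either serves a context request from a peer device, resumes the dialogue if the same user returns, or keeps waiting. The class and method names are illustrative assumptions.

```python
class DirectDevice:
    def __init__(self, name):
        self.name = name
        self.context = {"history": []}
        self.pending_requests = []  # peer devices requesting the context

    def handle_user(self, utterance):
        # Step S145: process the user's request, updating local context.
        self.context["history"].append(utterance)

    def after_departure_step(self, user_returned=False):
        """One iteration of the S148-S151 loop; returns what happened."""
        if self.pending_requests:              # step S148, Yes
            peer = self.pending_requests.pop(0)
            peer.context = dict(self.context)  # step S149: send context
            return "sent"
        if user_returned:                      # step S151, Yes
            return "resume"                    # back to step S145
        return "wait"                          # keep looping
```

A pending request from a peer takes priority; otherwise the device resumes with the returning user or idles, mirroring the branch order in the flowchart.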
- The IoT device detects the proximity of a user and, based on that proximity, executes a process of requesting context information from another IoT device, whereby it can acquire the context information from the other IoT device.
- When the holder of the context information is unknown, or when there are a plurality of candidate holders, the request for the context information may be broadcast to one or more IoT devices.
- As a method for determining candidate IoT devices, for example, information on IoT devices adjacent or close to the current position may be acquired, and those adjacent or nearby IoT devices may be determined as candidates for the holder of the context information.
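The candidate-selection idea above can be sketched as a simple position-based filter: prefer devices near the requester, and fall back to broadcasting to every known device when no nearby candidate exists. The device positions and distance threshold are illustrative assumptions.

```python
import math

def request_targets(devices, my_pos, radius=5.0):
    """devices: {name: (x, y)}. Return the devices to query for context."""
    nearby = [name for name, pos in devices.items()
              if math.dist(pos, my_pos) <= radius]
    # If no device is clearly nearby, broadcast to every known device.
    return sorted(nearby) if nearby else sorted(devices)
```

Calling it with a position near one device returns only that device; from an unknown location, the request degenerates into a broadcast.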
- To determine that the user wants to perform dialogue processing, or to detect the start of a dialogue, the IoT device constantly monitors the microphone input and detects the "start of dialogue" in response to a specific keyword.
- the IoT device may start accepting voice commands when a predetermined keyword is spoken.
- the IoT device may start monitoring the microphone input by detecting the proximity or presence of the user with the sensor 300, for example.
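The keyword-triggered start of dialogue described above can be sketched as a filter over a stream of transcribed utterances: everything before the wake keyword is ignored, and everything after it is accepted as a voice command. The wake phrase itself is an arbitrary placeholder, not specified by the present disclosure.

```python
WAKE_KEYWORD = "hello device"  # assumed placeholder phrase

def filter_commands(transcripts, wake=WAKE_KEYWORD):
    """Return only the utterances spoken after the wake keyword."""
    accepting = False
    commands = []
    for text in transcripts:
        if not accepting:
            # "Start of dialogue": begin accepting after the keyword.
            accepting = wake in text.lower()
            continue
        commands.append(text)
    return commands
```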
- In the description above, the proximity of the user is detected by the sensor 300 and the proximity IoT device acquires the context information from the IoT device the user had been using, but the present disclosure is not limited to this example.
- The movement of the user who uses the IoT device may be detected by the sensor 300, and the user's moving direction detected from that movement. Based on the moving direction, the IoT device may select the IoT device that the user will approach next and, after selecting it, transmit the context information to that device.
- A rule that identifies the IoT device at which the user is likely to appear next may be registered in each IoT device. For example, a rule defining a correspondence such as "if the sensors detect the approach of a person in the order A then B, the next IoT device to be approached is this IoT device" may be registered in each IoT device.
- Each IoT device may gradually learn rules that associate the order in which the sensor 300 senses a person with an IoT device. For example, if the sensors detect a person approaching in a certain order, the IoT devices may learn a rule that a particular IoT device is likely to be used next.
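One minimal way to realize the gradual learning described above is to count observed device-to-device transitions and predict the most frequent successor. The counting scheme below is an illustrative assumption; the patent does not prescribe a learning method.

```python
from collections import Counter, defaultdict

class TransitionLearner:
    def __init__(self):
        # device name -> Counter of devices visited next
        self.counts = defaultdict(Counter)

    def observe(self, visit_sequence):
        """Record one observed sequence of sensed device visits."""
        for here, nxt in zip(visit_sequence, visit_sequence[1:]):
            self.counts[here][nxt] += 1

    def predict_next(self, device):
        """Most frequently observed successor, or None if never seen."""
        if not self.counts[device]:
            return None
        return self.counts[device].most_common(1)[0][0]
```

After a few observed walks through the home, `predict_next("living")` names the device the context should be forwarded to.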
- Identification information such as the IP address of each IoT device may be associated with its actual position in a list, and when a sensor reacts, the context information may be passed to the specific IoT device corresponding to the location of that sensor.
- Alternatively, the IoT device that the user will approach next may be left unspecified. That is, the installer of the IoT devices designates all adjacent IoT devices, and the source IoT device may transfer the context to all of those IoT devices.
- the information processing system according to the present embodiment is particularly useful in the following cases.
- One case is when a user who was watching a video on an IoT device placed in the living room moves to another room and says to the IoT device in that room, "Show me the continuation of the previous video."
- The IoT device in that room acquires the context information from the IoT device placed in the living room in response to the user's movement and, by referring to the context information, can continue playing the same video from the user's utterance alone, without requiring a user logon process.
- Another case is when an IoT device (for example, a tablet terminal) downloads a recipe. The IoT device placed in the kitchen acquires the context information from the IoT device that downloaded the recipe. When the user utters "Show me the recipe from before" to the IoT device placed in the kitchen, that device can display the recipe downloaded by the other IoT device by referring to the context information, without requiring a user logon process.
- A user ID may also be associated with the user IDs of the user's close relatives (for example, father, mother, or siblings). The context information associated with these user IDs is extracted, and the dialogue can be continued based on what the relative had been saying to the IoT device until then. For example, when a user asks an IoT device "Where did Dad say he wants to go?", the queried IoT device identifies the user ID of "Dad" and, from the context information associated with that user ID, can extract the content related to the trip and compose an answer.
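The relative-ID lookup described above can be sketched as follows: resolve the mentioned relative to a user ID, then search the context entries stored under that ID for the topic of the question. The data layout, the relative-to-ID mapping, and the substring match are all illustrative assumptions.

```python
RELATIVES = {"dad": "user_father", "mom": "user_mother"}  # assumed mapping

def answer_from_relative(contexts, relative, topic):
    """contexts: {user_id: [utterances]}. Return utterances on the topic."""
    user_id = RELATIVES.get(relative)
    if user_id is None:
        return []  # no known ID for this relative
    return [u for u in contexts.get(user_id, []) if topic in u]
```

An answer would then be composed from the returned utterances; an unknown relative simply yields nothing to answer from.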
- In this way, the proximity IoT device can acquire the context information directly or indirectly from the IoT device the user had been using.
- The IoT device the user has left may store the context information for a while. If it is extremely important to avoid losing the context information, the device may store the context information in a home server, a cloud server, or the like.
- Likewise, the proximity IoT device may hold the context information for a while after obtaining it from the other IoT device.
- As the context information passed from one IoT device to another, all of the data may be transmitted. However, when the amount is enormous and transmission cannot keep up, only the amount necessary for the processing performed by the receiving IoT device may be transmitted. The source IoT device may first transmit the information necessary to continue the dialogue processing, and transmit the rest in the background while the receiving IoT device resumes the dialogue processing.
- When the video data being viewed is itself included in the context information, that is, when the content file exists only on the source IoT device, the content may be streamed from the source IoT device to the proximity IoT device.
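The partial-transmission idea above can be sketched by splitting the context into an urgent part, sent first so the dialogue can resume, and a bulky remainder shipped in the background. The field names and the "urgent" set are illustrative assumptions.

```python
URGENT_FIELDS = {"dialogue_state", "last_request"}  # assumed minimal set

def split_context(context):
    """Split a context dict into (urgent, background) parts."""
    urgent = {k: v for k, v in context.items() if k in URGENT_FIELDS}
    background = {k: v for k, v in context.items() if k not in URGENT_FIELDS}
    return urgent, background
```

The receiving device would restart the dialogue from the `urgent` part while the `background` part (for example, media data) is still in flight.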
- The processing itself may also be delegated to another IoT device. That is, the main part of the processing continues to run on the server (or on the terminal nearest to the user), but may follow the user by cooperating with the surrounding IoT devices.
- this method is also referred to as “distributed agent method”.
- In this case, an IoT device acting as the delegate and a time limit for the delegated processing may be set, so that which IoT device should hold the context information and perform the delegated processing is determined appropriately.
- There are cases where the transmission of the context information from one IoT device to another cannot keep up, or where the transmission fails. That is, the user may start dialogue processing with the proximity IoT device while the context information is still being transmitted to it.
- In such a case, the IoT devices perform, for example, the following processing.
- The proximity IoT device or the source IoT device may make the user wait until the transfer of the context information is completed. If possible, the devices may transfer the remaining context information in the background while restarting the dialogue processing using only the partial context information whose transmission has been completed. Alternatively, the proximity IoT device may give up taking over the context information, and ask the user again.
- the IoT device may return to an initial state, that is, a state without context information.
- the IoT device may store the context information itself so that it can be resumed at any time.
- By storing the context information itself, the IoT device can resume a dialogue with the user, for example, "the trip that was being considered last week" or "in the middle of searching for an inn."
- The IoT device may define a specific keyword for resetting the previous dialogue processing. For example, when the user speaks that specific keyword, the IoT device may return to the state before the dialogue processing with the user started. Likewise, when the user utters a topic-switching keyword such as "by the way...", the IoT device may return to the state before the dialogue processing started.
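The reset behavior described above can be sketched as follows: when an utterance contains the reset phrase or a topic-switch phrase, the device discards the current context and returns to the initial (context-free) state. The keywords themselves are illustrative assumptions.

```python
RESET_KEYWORDS = ("reset", "by the way")  # assumed placeholder phrases

def next_context(context, utterance):
    """Return the context to use after hearing `utterance`."""
    if any(keyword in utterance.lower() for keyword in RESET_KEYWORDS):
        return {}  # state before the dialogue processing started
    return context
```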
- When acquiring the context information from another IoT device, the proximity IoT device may acquire the temporally newest context information on the first acquisition, and acquire past data when the user designates content specifying a time. For example, when the user designates content specifying a time such as "one week ago", the context information from the designated week earlier may be acquired from the other IoT device or the cloud server 10.
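The time-designated retrieval above can be sketched as follows: by default the temporally newest snapshot is returned, while a designation such as "one week ago" selects the snapshot closest to that past time. The snapshot list layout is an illustrative assumption.

```python
from datetime import datetime, timedelta

def pick_context(snapshots, now, days_ago=None):
    """snapshots: [(timestamp, context)], unsorted. Default: newest."""
    if not snapshots:
        return None
    if days_ago is None:
        # First acquisition: the temporally newest context information.
        return max(snapshots, key=lambda s: s[0])[1]
    target = now - timedelta(days=days_ago)
    # Snapshot closest to the designated time.
    return min(snapshots, key=lambda s: abs(s[0] - target))[1]
```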
- An IoT device may receive sensing data from the sensor 300, estimate the user's moving direction from that data, determine the IoT device that the user is likely to operate next, and transmit the context information to that device. If the destination IoT device has stopped operating, the source IoT device may transmit a command for starting the destination device before transmitting the context information.
- When an IoT device that has received the context information further detects the movement of the user, it may transmit the received context information to another IoT device considered to be located in the user's movement direction.
- When the IoT device transmits the received context information to another IoT device, it may delete the context information it received.
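The direction-based forwarding above can be sketched as follows: pick the nearest device lying ahead of the user's movement, wake it if it is asleep, send the context, then delete the local copy. The vector projection and the device records are illustrative assumptions.

```python
def forward_context(devices, my_pos, direction, context):
    """devices: list of dicts with 'pos' and 'asleep'. Returns actions taken."""
    def along(d):
        # Projection of the device's offset onto the movement direction.
        dx = d["pos"][0] - my_pos[0]
        dy = d["pos"][1] - my_pos[1]
        return dx * direction[0] + dy * direction[1]

    ahead = [d for d in devices if along(d) > 0]
    if not ahead:
        return []
    target = min(ahead, key=along)  # nearest device along the direction
    actions = []
    if target["asleep"]:
        actions.append("wake")      # start command before transmission
    actions.append("send")
    target["context"] = dict(context)
    context.clear()                 # delete the local copy after sending
    return actions
```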
- FIG. 8 is a block diagram illustrating a hardware configuration example of the information processing apparatus 100 according to the embodiment of the present disclosure.
- Each of the above algorithms can be executed using, for example, the hardware configuration of the information processing apparatus shown in FIG. That is, the processing of each algorithm is realized by controlling the hardware shown in FIG. 8 using a computer program.
- The form of this hardware is arbitrary and includes, for example, personal computers, mobile phones, portable information terminals such as PHS and PDA devices, game machines, contact or non-contact IC chips, contact or non-contact IC cards, speakers, televisions, monitors, wearable devices, and various information appliances.
- PHS is an abbreviation of Personal Handy-phone System.
- the PDA is an abbreviation for Personal Digital Assistant.
- this hardware mainly includes a CPU 902, a ROM 904, a RAM 906, a host bus 908, and a bridge 910. Further, this hardware includes an external bus 912, an interface 914, an input unit 916, an output unit 918, a storage unit 920, a drive 922, a connection port 924, and a communication unit 926.
- the CPU is an abbreviation for Central Processing Unit.
- the ROM is an abbreviation for Read Only Memory.
- the RAM is an abbreviation for Random Access Memory.
- the CPU 902 functions as, for example, an arithmetic processing unit or a control unit, and controls the overall operation of each component or a part thereof based on various programs recorded in the ROM 904, the RAM 906, the storage unit 920, or the removable recording medium 928.
- the ROM 904 is a means for storing a program read by the CPU 902, data used for calculation, and the like.
- The RAM 906 temporarily or permanently stores, for example, a program read by the CPU 902 and various parameters that change as appropriate when the program is executed.
- These components are connected to each other via, for example, a host bus 908 capable of high-speed data transmission. The host bus 908 is in turn connected, via a bridge 910, to an external bus 912 having a relatively low data transmission speed.
- As the input unit 916, for example, a mouse, keyboard, touch panel, button, switch, or lever is used. A remote controller capable of transmitting control signals using infrared or other radio waves may also be used as the input unit 916.
- As the output unit 918, for example, a display device such as a CRT, LCD, PDP, or ELD, an audio output device such as a speaker or headphones, a printer, a mobile phone, or a facsimile is used.
- the above CRT is an abbreviation of Cathode Ray Tube.
- the LCD is an abbreviation for Liquid Crystal Display.
- the PDP is an abbreviation for Plasma Display Panel.
- the above ELD is an abbreviation for Electro-Luminescence Display.
- the storage unit 920 is a device for storing various data.
- As the storage unit 920, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
- the above HDD is an abbreviation for Hard Disk Drive.
- the drive 922 is a device that reads information recorded on a removable recording medium 928 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium 928.
- the removable recording medium 928 is, for example, a DVD medium, a Blu-ray medium, an HD DVD medium, various semiconductor storage media, or the like.
- the removable recording medium 928 may be, for example, an IC card on which a non-contact type IC chip is mounted, an electronic device, or the like.
- the above IC is an abbreviation for Integrated Circuit.
- The connection port 924 is a port for connecting an externally connected device 930, such as a USB port, an IEEE 1394 port, a SCSI port, an RS-232C port, or an optical audio terminal.
- the external connection device 930 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
- the USB is an abbreviation for Universal Serial Bus.
- the SCSI is an abbreviation for Small Computer System Interface.
- the communication unit 926 is a communication device for connecting to the network 932.
- The communication unit 926 is, for example, a wired or wireless LAN card, a Bluetooth (registered trademark) or WUSB communication card, an optical communication router, an ADSL router, or a device for contact or non-contact communication.
- the network 932 connected to the communication unit 926 is configured by a wired or wireless network, such as the Internet, home LAN, infrared communication, visible light communication, broadcast, or satellite communication.
- the above LAN is an abbreviation for Local Area Network.
- the WUSB is an abbreviation for Wireless USB.
- the above ADSL is an abbreviation for Asymmetric Digital Subscriber Line.
- Each component described above may be configured using a general-purpose member, or may be configured by hardware specialized for the function of each component. Such a configuration can be appropriately changed according to the technical level at the time of implementation.
- As described above, according to an embodiment of the present disclosure, there is provided an information processing apparatus 100 that detects the proximity of a user and, based on that proximity, acquires from another apparatus context information representing the processing content between the user and the other apparatus.
- The information processing apparatus 100 acquires the context information from the other apparatus based on the proximity of the user, and can thereby take over the processing (dialogue processing) performed between the other apparatus and the user.
- According to an embodiment of the present disclosure, there is also provided an information processing apparatus 200 that provides context information to another apparatus based on a request for the context information from the other apparatus, which has detected the proximity of the user.
- By providing the context information in response to such a request, the information processing apparatus 200 makes it possible for the processing (dialogue processing) performed between itself and the user to be taken over by the other apparatus.
- each step in the processing executed by each device in this specification does not necessarily have to be processed in chronological order in the order described as a sequence diagram or flowchart.
- each step in the processing executed by each device may be processed in an order different from the order described as the flowchart, or may be processed in parallel.
- (1) An information processing apparatus comprising: a control unit that detects a user's departure from the information processing apparatus and performs control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- (2) The information processing apparatus according to (1), wherein the control unit performs control to transmit the context information directly to the other apparatus.
- (3) The information processing apparatus according to (1), wherein the control unit performs control to transmit the context information to the other apparatus via a server apparatus.
- (4) The information processing apparatus according to any one of (1) to (3), wherein the control unit detects the direction in which the user moves away from the information processing apparatus.
- (5) The information processing apparatus according to any one of (1) to (4), wherein the control unit detects the holder of the context information using a user ID and extracts the context information.
- (6) The information processing apparatus according to (5), wherein the control unit extracts the context information based on the user ID.
- (7) The information processing apparatus according to (5), wherein the control unit extracts the context information based on another user ID different from the user ID.
- (8) The information processing apparatus according to any one of (1) to (7), wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
- (9) The information processing apparatus according to (8), wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
- (10) The information processing apparatus according to any one of (1) to (9), wherein the control unit performs control to transmit the context information generated during a partial time period.
- (11) The information processing apparatus according to any one of (1) to (10), wherein the control unit determines the transmission destination of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
- (12) The information processing apparatus according to any one of (1) to (10), wherein, when the transmission destination of the context information cannot be uniquely determined, the control unit broadcasts the context information toward the candidate apparatuses.
- (13) The information processing apparatus according to any one of (1) to (12), wherein the context information is information based on a voice input by the user.
- (14) An information processing apparatus comprising: a control unit that detects the user's proximity to the information processing apparatus and performs control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- (15) The information processing apparatus according to (14), wherein the control unit performs control to receive the context information directly from the other apparatus with which the user had been interacting.
- (16) The information processing apparatus according to (14) or (15), wherein the control unit performs control to receive the context information, via a server apparatus, from the other apparatus with which the user had been interacting.
- (17) The information processing apparatus according to any one of (14) to (16), wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
- (18) The information processing apparatus according to (17), wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
- (19) The information processing apparatus according to any one of (14) to (18), wherein the control unit performs control to receive the context information generated during a partial time period.
- (20) The information processing apparatus according to any one of (14) to (19), wherein the control unit determines the reception source of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
- (21) The information processing apparatus according to any one of (14) to (20), wherein, when the control unit further detects the movement of the user, the control unit performs control to transmit at least part of the context information toward another apparatus located in the detected moving direction.
- (22) The information processing apparatus according to (21), wherein, when the control unit detects the movement of the user before any interaction with the user, the control unit performs control to delete the received context information.
- (23) The information processing apparatus according to any one of (14) to (22), wherein the context information is information based on a voice input by the user.
- (24) An information processing method comprising: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- (25) A computer program for causing a computer to execute: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- (26) An information processing method comprising: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- (27) A computer program for causing a computer to execute: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
1. An embodiment of the present disclosure
1.1. Background
1.2. Configuration example
1.3. Operation example
2. Hardware configuration example
3. Conclusion
[1.1. Background]
Before describing an embodiment of the present disclosure in detail, the background of the embodiment will first be described. After the background has been described, the embodiment of the present disclosure will be described in detail.
First, an overall configuration example of an information processing system according to an embodiment of the present disclosure will be described. FIG. 1 is an explanatory diagram illustrating an overall configuration example of the information processing system 1 according to an embodiment of the present disclosure. FIG. 1 shows an overall configuration example of the information processing system 1 in which context information is shared between the information processing apparatuses 100 and 200, which are IoT devices, so that the information processing apparatus 200 can take over the processing that the user has been performing with the information processing apparatus 100.
Next, the hardware configuration of the information processing apparatus 100 according to an embodiment of the present disclosure will be described with reference to FIG. 8. FIG. 8 is a block diagram illustrating a hardware configuration example of the information processing apparatus 100 according to the embodiment of the present disclosure. Each of the algorithms described above can be executed using, for example, the hardware configuration of the information processing apparatus shown in FIG. 8; that is, the processing of each algorithm is realized by controlling the hardware shown in FIG. 8 with a computer program. The form of this hardware is arbitrary and includes, for example, personal computers, mobile phones, portable information terminals such as PHS and PDA devices, game machines, contact or non-contact IC chips, contact or non-contact IC cards, speakers, televisions, monitors, wearable devices, and various information appliances. PHS is an abbreviation for Personal Handy-phone System, and PDA is an abbreviation for Personal Digital Assistant.
As described above, according to an embodiment of the present disclosure, there is provided an information processing apparatus 100 that detects the proximity of a user and, based on that proximity, acquires from another apparatus context information representing the processing content between the user and the other apparatus.
(1)
An information processing apparatus comprising:
a control unit that detects a user's departure from the information processing apparatus and performs control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
(2)
The information processing apparatus according to (1), wherein the control unit performs control to transmit the context information directly to the other apparatus.
(3)
The information processing apparatus according to (1), wherein the control unit performs control to transmit the context information to the other apparatus via a server apparatus.
(4)
The information processing apparatus according to any one of (1) to (3), wherein the control unit detects the direction in which the user moves away from the information processing apparatus.
(5)
The information processing apparatus according to any one of (1) to (4), wherein the control unit detects the holder of the context information using a user ID and extracts the context information.
(6)
The information processing apparatus according to (5), wherein the control unit extracts the context information based on the user ID.
(7)
The information processing apparatus according to (5), wherein the control unit extracts the context information based on another user ID different from the user ID.
(8)
The information processing apparatus according to any one of (1) to (7), wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
(9)
The information processing apparatus according to (8), wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
(10)
The information processing apparatus according to any one of (1) to (9), wherein the control unit performs control to transmit the context information generated during a partial time period.
(11)
The information processing apparatus according to any one of (1) to (10), wherein the control unit determines the transmission destination of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
(12)
The information processing apparatus according to any one of (1) to (10), wherein, when the transmission destination of the context information cannot be uniquely determined, the control unit broadcasts the context information toward the candidate apparatuses.
(13)
The information processing apparatus according to any one of (1) to (12), wherein the context information is information based on a voice input by the user.
(14)
An information processing apparatus comprising:
a control unit that detects the user's proximity to the information processing apparatus and performs control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
(15)
The information processing apparatus according to (14), wherein the control unit performs control to receive the context information directly from the other apparatus with which the user had been interacting.
(16)
The information processing apparatus according to (14) or (15), wherein the control unit performs control to receive the context information, via a server apparatus, from the other apparatus with which the user had been interacting.
(17)
The information processing apparatus according to any one of (14) to (16), wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
(18)
The information processing apparatus according to (17), wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
(19)
The information processing apparatus according to any one of (14) to (18), wherein the control unit performs control to receive the context information generated during a partial time period.
(20)
The information processing apparatus according to any one of (14) to (19), wherein the control unit determines the reception source of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
(21)
The information processing apparatus according to any one of (14) to (20), wherein, when the control unit further detects the movement of the user, the control unit performs control to transmit at least part of the context information toward another apparatus located in the detected moving direction.
(22)
The information processing apparatus according to (21), wherein, when the control unit detects the movement of the user before any interaction with the user, the control unit performs control to delete the received context information.
(23)
The information processing apparatus according to any one of (14) to (22), wherein the context information is information based on a voice input by the user.
(24)
An information processing method comprising: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
(25)
A computer program for causing a computer to execute: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
(26)
An information processing method comprising: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
(27)
A computer program for causing a computer to execute: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
110 Control unit
120 Detection unit
130 Context information management unit
300 Sensor
400a, 400b Input/output devices
Claims (27)
- An information processing apparatus comprising:
a control unit that detects a user's departure from the information processing apparatus and performs control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point. - The information processing apparatus according to claim 1, wherein the control unit performs control to transmit the context information directly to the other apparatus.
- The information processing apparatus according to claim 1, wherein the control unit performs control to transmit the context information to the other apparatus via a server apparatus.
- The information processing apparatus according to claim 1, wherein the control unit detects the direction in which the user moves away from the information processing apparatus.
- The information processing apparatus according to claim 1, wherein the control unit detects the holder of the context information using a user ID and extracts the context information.
- The information processing apparatus according to claim 5, wherein the control unit extracts the context information based on the user ID.
- The information processing apparatus according to claim 5, wherein the control unit extracts the context information based on another user ID different from the user ID.
- The information processing apparatus according to claim 1, wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
- The information processing apparatus according to claim 8, wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
- The information processing apparatus according to claim 1, wherein the control unit performs control to transmit the context information generated during a partial time period.
- The information processing apparatus according to claim 1, wherein the control unit determines the transmission destination of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
- The information processing apparatus according to claim 1, wherein, when the transmission destination of the context information cannot be uniquely determined, the control unit broadcasts the context information toward the candidate apparatuses.
- The information processing apparatus according to claim 1, wherein the context information is information based on a voice input by the user.
- An information processing apparatus comprising:
a control unit that detects the user's proximity to the information processing apparatus and performs control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point. - The information processing apparatus according to claim 14, wherein the control unit performs control to receive the context information directly from the other apparatus with which the user had been interacting.
- The information processing apparatus according to claim 14, wherein the control unit performs control to receive the context information, via a server apparatus, from the other apparatus with which the user had been interacting.
- The information processing apparatus according to claim 14, wherein the control unit detects the user's behavior based on sensing data and detects the user's moving direction.
- The information processing apparatus according to claim 17, wherein the control unit detects the user's behavior using an image captured by an imaging device and detects the user's moving direction.
- The information processing apparatus according to claim 14, wherein the control unit performs control to receive the context information generated during a partial time period.
- The information processing apparatus according to claim 14, wherein the control unit determines the reception source of the context information using information in which an apparatus and the position of that apparatus are associated with each other.
- The information processing apparatus according to claim 14, wherein, when the control unit further detects the movement of the user, the control unit performs control to transmit at least part of the context information toward another apparatus located in the detected moving direction.
- The information processing apparatus according to claim 21, wherein, when the control unit detects the movement of the user before any interaction with the user, the control unit performs control to delete the received context information.
- The information processing apparatus according to claim 14, wherein the context information is information based on a voice input by the user.
- An information processing method comprising: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- A computer program for causing a computer to execute: detecting a user's departure from an information processing apparatus, and performing control to transmit, toward another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- An information processing method comprising: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
- A computer program for causing a computer to execute: detecting a user's proximity to an information processing apparatus, and performing control to receive, from another apparatus, at least part of the context information that had been exchanged with the user up to that point.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016556426A JP6645438B2 (ja) | 2014-10-27 | 2015-09-09 | 情報処理装置、情報処理方法およびコンピュータプログラム |
CN201580053515.0A CN106796570B (zh) | 2014-10-27 | 2015-09-09 | 信息处理设备、信息处理方法和计算机程序 |
US15/514,590 US9936355B2 (en) | 2014-10-27 | 2015-09-09 | Information processing apparatus, information processing method, and computer program |
EP15854364.5A EP3214555B1 (en) | 2014-10-27 | 2015-09-09 | Information processing device, information processing method, and computer program for context sharing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014218111 | 2014-10-27 | ||
JP2014-218111 | 2014-10-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016067765A1 true WO2016067765A1 (ja) | 2016-05-06 |
Family
ID=55857111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/075629 WO2016067765A1 (ja) | 2014-10-27 | 2015-09-09 | 情報処理装置、情報処理方法およびコンピュータプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9936355B2 (ja) |
EP (1) | EP3214555B1 (ja) |
JP (1) | JP6645438B2 (ja) |
CN (1) | CN106796570B (ja) |
WO (1) | WO2016067765A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017215639A (ja) * | 2016-05-30 | 2017-12-07 | シャープ株式会社 | ネットワークシステム、音声出力方法、サーバおよび電気機器 |
JP2019527403A (ja) * | 2016-06-23 | 2019-09-26 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 通知に対するユーザ可用性又は受容性を測定する方法、装置及び機械可読媒体 |
WO2020026799A1 (ja) * | 2018-07-31 | 2020-02-06 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
WO2020049921A1 (ja) * | 2018-09-07 | 2020-03-12 | ソニー株式会社 | 端末装置、端末装置の制御方法および記憶媒体 |
JP2020521164A (ja) * | 2017-05-16 | 2020-07-16 | グーグル エルエルシー | デバイス間ハンドオフ |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019040468A (ja) * | 2017-08-25 | 2019-03-14 | 富士ゼロックス株式会社 | 情報処理装置及びプログラム |
US10877781B2 (en) * | 2018-07-25 | 2020-12-29 | Sony Corporation | Information processing apparatus and information processing method |
JP7180330B2 (ja) * | 2018-11-30 | 2022-11-30 | Ricoh Co., Ltd. | Information processing system, information processing device, and method |
WO2021029457A1 (ko) * | 2019-08-13 | 2021-02-18 | LG Electronics Inc. | Artificial intelligence server for providing information to a user, and method therefor |
CN117396848A (zh) * | 2021-06-22 | 2024-01-12 | Telefonaktiebolaget LM Ericsson | Providing communication services to a user through an I/O device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003158664A (ja) * | 2001-11-21 | 2003-05-30 | Matsushita Electric Ind Co Ltd | Camera control device |
WO2004077291A1 (ja) * | 2003-02-25 | 2004-09-10 | Matsushita Electric Industrial Co., Ltd. | Application program prediction method and mobile terminal |
JP2006172440A (ja) * | 2004-11-19 | 2006-06-29 | Fujitsu Ltd | Application state information transfer system |
JP2010205111A (ja) * | 2009-03-05 | 2010-09-16 | Nippon Telegr & Teleph Corp <Ntt> | Context reproduction system, context reproduction method, first terminal device, second terminal device, context acquisition device or storage device, and programs therefor |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE519854C2 (sv) * | 2002-02-15 | 2003-04-15 | Hotsip Ab | A method for distributing information |
JP3698716B2 (ja) * | 2003-02-25 | 2005-09-21 | Matsushita Electric Industrial Co., Ltd. | Application program prediction method and mobile terminal |
JP2007074710A (ja) * | 2005-08-12 | 2007-03-22 | Mitsubishi Materials Corp | Audio-visual data communication system, audio-visual data communication method, audio-visual data communication program, and viewing-space switching device for an audio-visual data communication system |
US8269834B2 (en) * | 2007-01-12 | 2012-09-18 | International Business Machines Corporation | Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream |
EP2522194B1 (en) | 2010-01-06 | 2014-05-21 | InterDigital Patent Holdings, Inc. | Method and apparatus for assisted/coordinated intra-home communications |
US8880051B2 (en) * | 2012-11-16 | 2014-11-04 | Intel Corporation | Automatic seamless context sharing across multiple devices |
JP5356615B1 (ja) * | 2013-02-01 | 2013-12-04 | Panasonic Corporation | Customer behavior analysis device, customer behavior analysis system, and customer behavior analysis method |
2015
- 2015-09-09 CN CN201580053515.0A patent/CN106796570B/zh active Active
- 2015-09-09 EP EP15854364.5A patent/EP3214555B1/en active Active
- 2015-09-09 US US15/514,590 patent/US9936355B2/en active Active
- 2015-09-09 WO PCT/JP2015/075629 patent/WO2016067765A1/ja active Application Filing
- 2015-09-09 JP JP2016556426A patent/JP6645438B2/ja not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003158664A (ja) * | 2001-11-21 | 2003-05-30 | Matsushita Electric Ind Co Ltd | Camera control device |
WO2004077291A1 (ja) * | 2003-02-25 | 2004-09-10 | Matsushita Electric Industrial Co., Ltd. | Application program prediction method and mobile terminal |
JP2006172440A (ja) * | 2004-11-19 | 2006-06-29 | Fujitsu Ltd | Application state information transfer system |
JP2010205111A (ja) * | 2009-03-05 | 2010-09-16 | Nippon Telegr & Teleph Corp <Ntt> | Context reproduction system, context reproduction method, first terminal device, second terminal device, context acquisition device or storage device, and programs therefor |
Non-Patent Citations (1)
Title |
---|
See also references of EP3214555A4 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017215639A (ja) * | 2016-05-30 | 2017-12-07 | Sharp Corporation | Network system, audio output method, server, and electric appliance |
JP2019527403A (ja) * | 2016-06-23 | 2019-09-26 | Koninklijke Philips N.V. | Method, apparatus and machine-readable medium for measuring user availability or receptivity to notifications |
US11166087B2 (en) | 2017-05-16 | 2021-11-02 | Google Llc | Cross-device handoffs |
JP2020521164A (ja) * | 2017-05-16 | 2020-07-16 | Google LLC | Cross-device handoffs |
JP2021007057A (ja) * | 2017-05-16 | 2021-01-21 | Google LLC | Cross-device handoffs |
JP2021072137A (ja) * | 2017-05-16 | 2021-05-06 | Google LLC | Cross-device handoffs |
JP7216751B2 (ja) | 2017-05-16 | 2023-02-01 | Google LLC | Cross-device handoffs |
US11641535B2 (en) | 2017-05-16 | 2023-05-02 | Google Llc | Cross-device handoffs |
JPWO2020026799A1 (ja) * | 2018-07-31 | 2021-08-19 | Sony Group Corporation | Information processing device, information processing method, and program |
WO2020026799A1 (ja) * | 2018-07-31 | 2020-02-06 | Sony Corporation | Information processing device, information processing method, and program |
JP7290154B2 (ja) | 2018-07-31 | 2023-06-13 | Sony Group Corporation | Information processing device, information processing method, and program |
WO2020049921A1 (ja) * | 2018-09-07 | 2020-03-12 | Sony Corporation | Terminal device, terminal device control method, and storage medium |
JPWO2020049921A1 (ja) * | 2018-09-07 | 2021-09-16 | Sony Group Corporation | Terminal device, terminal device control method, and storage medium |
JP7396286B2 (ja) | 2018-09-07 | 2023-12-12 | Sony Group Corporation | Terminal device, terminal device control method, and storage medium |
US11979511B2 (en) | 2018-09-07 | 2024-05-07 | Sony Group Corporation | Terminal device, terminal device control method, and memory medium |
Also Published As
Publication number | Publication date |
---|---|
US9936355B2 (en) | 2018-04-03 |
CN106796570B (zh) | 2020-10-09 |
JP6645438B2 (ja) | 2020-02-14 |
CN106796570A (zh) | 2017-05-31 |
JPWO2016067765A1 (ja) | 2017-08-31 |
EP3214555B1 (en) | 2019-12-25 |
EP3214555A4 (en) | 2018-05-16 |
EP3214555A1 (en) | 2017-09-06 |
US20170238144A1 (en) | 2017-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016067765A1 (ja) | Information processing apparatus, information processing method, and computer program | |
US11212486B1 (en) | Location based device grouping with voice control | |
US20210050013A1 (en) | Information processing device, information processing method, and program | |
US10997973B2 (en) | Voice recognition system having expanded spatial range | |
US11514917B2 (en) | Method, device, and system of selectively using multiple voice data receiving devices for intelligent service | |
CN109508167B (zh) | 显示装置和在语音识别***中控制显示装置的方法 | |
JP6503557B2 (ja) | Information processing device, information processing method, and program | |
JP2021007057A (ja) | Cross-device handoffs | |
CN112740626B (zh) | Method and device for providing a notification by making a plurality of electronic devices work in cooperation | |
US11172007B2 (en) | Technologies for a seamless data streaming experience | |
KR102374584B1 (ko) | Method and device for displaying an image | |
US20200053399A1 (en) | Method for contents playback with continuity and electronic device therefor | |
US20220172722A1 (en) | Electronic device for processing user utterance and method for operating same | |
US20210110837A1 (en) | Electronic device supporting improved speech recognition | |
JP6973380B2 (ja) | Information processing device and information processing method | |
US20210224066A1 (en) | Information processing device and information processing method | |
CN106535136B (zh) | Detection and registration method for electronic devices, and *** | |
KR102642268B1 (ko) | Method and apparatus for processing a shared task | |
US20240129370A1 (en) | A computer software module arrangement, a circuitry arrangement, an arrangement and a method for an improved user interface for internet of things devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15854364 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016556426 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15514590 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015854364 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015854364 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |