WO2021208432A1 - Interaction method and apparatus, interaction system, electronic device, and storage medium - Google Patents
- Publication number: WO2021208432A1 (application PCT/CN2020/130092)
- Authority: WIPO (PCT)
- Prior art keywords: action, interactive object, trigger operation, display device, display
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Definitions
- the present disclosure relates to the field of computer technology, and in particular to an interaction method, an interaction apparatus, an interaction system, an electronic device, and a storage medium.
- in most human-computer interaction, the user provides input through keys, touch, or voice, and the device responds by presenting images, text, or virtual characters on a display screen. At present, most virtual characters are improved on the basis of voice assistants, and the interaction between users and virtual characters remains superficial.
- the embodiments of the present disclosure provide at least one interaction method, device, interaction system, electronic equipment, and storage medium.
- embodiments of the present disclosure provide an interaction method, the method including: receiving a first trigger operation on a display device; obtaining an action identifier of an interactive object used to respond to the first trigger operation; and, based on the action identifier, controlling the interactive object displayed by the display device to respond, the response including an action corresponding to the action identifier of the interactive object.
- the embodiment of the present disclosure proposes a solution capable of responding to a user's trigger operation by an interactive object.
- the action identifier corresponding to the first trigger operation can be used to control the interactive object to respond to the user through an anthropomorphic action; because the response includes the action corresponding to the action identifier, the interaction process becomes more realistic and smooth, which can effectively improve the interaction experience.
- the embodiments of the present disclosure can also be applied to scenarios in which interactive objects introduce the functions provided by the display device, which can help user groups who have weak text understanding or do not have time to read text guidance to quickly obtain the information they need.
- the obtaining the action identifier of the interaction object used to respond to the first trigger operation includes:
- the action identifier corresponding to the first trigger operation is acquired.
- the obtaining the action identifier of the interaction object used to respond to the first trigger operation includes:
- the action identifier corresponding to the response data used to respond to the first trigger operation is acquired.
- the response data includes text data
- the preset mapping relationship includes a preset mapping relationship between key text data and action identifiers in the text data.
- in this way, the action identifier used to respond to the first trigger operation can be found quickly and accurately, so that the interactive object can be controlled to make the action corresponding to the action identifier in response to the first trigger operation.
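The lookup described above, from a trigger operation (or from key text data in its response data) to an action identifier, could be sketched as follows. This is an illustrative sketch only; all mapping entries, operation names, and identifier strings are invented and do not come from the patent.

```python
# Preset mapping between trigger operations and action identifiers
# (illustrative entries only).
TRIGGER_TO_ACTION = {
    "greeting_gesture": "act_greet_back",
    "tap_app_icon": "act_welcome",
}

# Preset mapping between key text data in response data and action
# identifiers (illustrative entries only).
KEYTEXT_TO_ACTION = {
    "transfer": "act_point_transfer_option",
    "balance": "act_point_balance_option",
}

def resolve_action_id(trigger, response_text=None):
    """Return the action identifier used to respond to a trigger operation.

    First try the direct trigger-to-action mapping; otherwise scan the
    response text for key text data mapped to an action identifier.
    """
    if trigger in TRIGGER_TO_ACTION:
        return TRIGGER_TO_ACTION[trigger]
    if response_text:
        for key_text, action_id in KEYTEXT_TO_ACTION.items():
            if key_text in response_text:
                return action_id
    return None
```

Either mapping alone, or both together, may be configured for a given interactive scene, mirroring the two alternatives described above.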
- the action identifier includes a physical action identifier of the interactive object, and/or a display position identifier of the interactive object when the interactive object performs an action.
- the physical action identifier can be used to identify the specific physical action that the interactive object makes during the response process, and the display position identifier can be used to identify the position at which the interactive object is displayed in the display area of the display device during the response process. At least one of these two identifiers can improve the display effect of the interactive object during the display process.
- the receiving the first trigger operation on the display device includes: receiving a trigger operation on a target application of the display device, and/or receiving a trigger operation on a target function option of the target application of the display device.
- the action identifier includes the physical action identifier of the interactive object
- the action corresponding to the action identifier includes the physical action of the interactive object pointing to the target display area of the display device
- the action identifier includes the display position identifier of the interactive object when the action is performed
- the action corresponding to the action identifier includes the action performed by the interactive object on the target display position
- the action corresponding to the action identification includes the physical action of the interactive object pointing to the target display area at the target display position.
- the physical action identifier of the interactive object may be a directional action identifier, such as pointing to a specific area, so that during the interaction the user can quickly learn the specific content to which the interactive object is currently responding, making the interaction process more realistic and smooth; and/or the display position identifier of the interactive object can identify the target display position at which the interactive object makes the directional or other action, so that the user can easily view the response content and possible occlusion can be avoided, achieving a better interactive effect.
- controlling the interactive object displayed by the display device to respond includes:
- the display screen of the interactive object is rendered, and the display screen includes any one of the following contents: the interactive object making the physical action corresponding to the physical action identifier;
- the interactive object making, at the target display position corresponding to the display position identifier, the physical action corresponding to the physical action identifier;
- controlling the display device to display the display screen of the interactive object includes:
- Control the display device to display the display screen of the interactive object on a background layer behind the display interface of the target application, with the display interface located above the background layer.
- in this way, the two can respond separately, which can avoid possible conflicts between the response process of the interactive object and the operation of the target application.
- the method further includes:
- Control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
- the response is not limited to the response related to the action identifier of the interactive object, and the response can also be achieved by playing voice or displaying a prompt box, so that the presentation mode of response data is diversified and the interactive experience is improved.
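The diversified presentation described above, an action of the interactive object, playback of voice data, and/or a prompt box on the interface, could be dispatched as in the following sketch. The class, method names, and call sequence are assumptions for illustration, not an interface defined by the patent.

```python
class DisplayDevice:
    """Stub display device that records which response modalities it presented."""
    def __init__(self):
        self.calls = []

    def perform_action(self, action_id):
        self.calls.append(("action", action_id))

    def play_voice(self, voice_data):
        self.calls.append(("voice", voice_data))

    def show_prompt(self, text):
        self.calls.append(("prompt", text))

def present_response(device, action_id=None, voice_data=None, prompt_text=None):
    """Present each available modality of the response; any combination is allowed."""
    if action_id is not None:
        device.perform_action(action_id)
    if voice_data is not None:
        device.play_voice(voice_data)
    if prompt_text is not None:
        device.show_prompt(prompt_text)
```

A response may thus combine an anthropomorphic action with voice and a prompt box, or use only a subset of them.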
- embodiments of the present disclosure provide an interactive system, including: a display device and a server;
- the display device is configured to obtain a first trigger operation on the display device and send the first trigger operation to the server, and control the interactive object displayed by the display device to respond based on the server's instruction;
- the server is configured to: receive the first trigger operation; obtain an action identifier of an interactive object used to respond to the first trigger operation; and, based on the action identifier, instruct the display device to control the interactive object to respond, the response including an action corresponding to the action identifier of the interactive object.
- an interaction device including:
- the receiving module is used to receive the first trigger operation on the display device
- An obtaining module configured to obtain an action identifier of an interactive object used to respond to the first trigger operation
- the control module is configured to control the interactive object displayed by the display device to respond based on the action identifier, and the response includes an action corresponding to the action identifier of the interactive object.
- when the acquiring module acquires the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module is configured to:
- the action identifier corresponding to the first trigger operation is acquired.
- when the acquiring module acquires the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module is configured to:
- the action identifier corresponding to the response data used to respond to the first trigger operation is acquired.
- the response data includes text data
- the preset mapping relationship includes a preset mapping relationship between key text data and action identifiers in the text data.
- the action identifier includes a physical action identifier of the interactive object, and/or a display position identifier of the interactive object when the interactive object performs an action.
- when the receiving module receives the first trigger operation on the display device, the receiving module is configured to: receive a trigger operation on a target application of the display device, and/or receive a trigger operation on a target function option of the target application of the display device.
- the action identifier includes the physical action identifier of the interactive object
- the action corresponding to the action identifier includes the physical action of the interactive object pointing to the target display area of the display device
- the action identifier includes the display position identifier of the interactive object when the action is taken
- the action corresponding to the action identifier includes the action made by the interactive object on the target display position
- the action corresponding to the action identifier includes the physical movement of the interactive object pointing to the target display area at the target display position;
- the target display area is the preset display area or a display area associated with the preset display area.
- when the control module controls, based on the action identifier, the interactive object displayed by the display device to respond, the control module is configured to:
- the display screen of the interactive object is rendered, and the display screen includes any one of the following contents: the interactive object making the physical action corresponding to the physical action identifier;
- the interactive object making, at the target display position corresponding to the display position identifier, the physical action corresponding to the physical action identifier;
- when the control module controls the display device to display the display screen of the interactive object, the control module is configured to:
- control the display device to display the display screen of the interactive object on a background layer behind the display interface of the target application, with the display interface located above the background layer.
- control module is further used to:
- Control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
- an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus.
- the memory stores machine-readable instructions executable by the processor.
- the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the processor executes the interaction method described in the first aspect.
- embodiments of the present disclosure provide a computer-readable storage medium having a computer program stored thereon; when the computer program is run by a processor, the processor executes the interaction method described above.
- FIG. 1 shows a schematic diagram of a display device provided by an embodiment of the present disclosure
- FIG. 2 shows a flowchart of an interaction method provided by an embodiment of the present disclosure
- FIG. 3 shows a schematic flowchart of a response process based on a first trigger operation provided by an embodiment of the present disclosure
- FIG. 4 shows a schematic flowchart of another response process based on a first trigger operation provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic diagram of the first display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
- FIG. 6 shows a schematic diagram of a second display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
- FIG. 7 shows a schematic diagram of a third display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
- FIG. 8 shows a schematic diagram of a nine-square grid of a display interface provided by an embodiment of the present disclosure
- FIG. 9 shows a schematic diagram of an interface of an interactive object provided by an embodiment of the present disclosure.
- FIG. 10 shows a schematic diagram of an interface of another interactive object provided by an embodiment of the present disclosure.
- FIG. 11 shows a schematic diagram of a specific processing flow for controlling an interactive object displayed by a display device to respond according to an embodiment of the present disclosure
- FIG. 12 shows a schematic diagram of a display screen of an interactive object provided by an embodiment of the present disclosure
- FIG. 13 shows a schematic structural diagram of an interactive system provided by an embodiment of the present disclosure
- FIG. 14 shows a schematic structural diagram of an interactive device provided by an embodiment of the present disclosure
- FIG. 15 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- At least one embodiment of the present disclosure provides an interactive method that can be executed by electronic devices such as a display device or a server.
- the display device is, for example, a terminal device such as a game console, a desktop computer, an advertising machine, a vehicle-mounted terminal, a virtual reality (VR) device, or an augmented reality (AR) device.
- the server may include a local server or a cloud server. The present disclosure does not limit the specific form of the display device and the server.
- the interactive object can be displayed on the display device.
- the interactive object can be any object capable of interacting with the target object: a virtual character, a virtual animal, a virtual item, a cartoon image, or any other avatar that can implement interactive functions. The avatar can be presented in either 2D or 3D form, which is not limited in the present disclosure.
- the target object can be a natural person, a robot, or other smart devices.
- the interaction mode between the interaction object and the target object can be an active interaction mode or a passive interaction mode.
- the target object can express a demand by making gestures or body movements, thereby triggering the interactive object to interact with it in the active interaction mode.
- the interactive object may actively greet the target object, prompt the target object to make an action, etc., so that the target object interacts with the interactive object in a passive manner.
- FIG. 1 shows a display device proposed by at least one embodiment of the present disclosure.
- the display device is a display device with a transparent display screen, which can display a three-dimensional picture on the transparent display screen to present an interactive object with a three-dimensional effect.
- the interactive objects displayed on the transparent display screen in Figure 1 are virtual cartoon characters.
- the display device may also be a mobile terminal such as a mobile phone or a tablet computer, and an application program (APP) capable of displaying interactive objects may be installed on the mobile terminal, for example, to realize the interaction between the interactive object and the user.
- it may be a general-purpose application configured with a software development kit (SDK) that realizes the interactive capabilities of interactive objects.
- for example, an SDK that implements the interactive capabilities of interactive objects can be embedded in a bank APP; then, when the bank APP is running, the interactive object can be called as needed to interact with the user.
- the above-mentioned display device may be configured with a memory and a processor, where the memory is used to store computer instructions that can run on the processor; when the processor executes the computer instructions, the interactive object displayed on the screen is made to respond to the target object.
- the embodiment of the present disclosure proposes a solution capable of responding to a user's trigger operation by an interactive object.
- the interactive object can respond to the user through anthropomorphic actions, so that the interaction process is smoother and the interaction experience can be effectively improved.
- the embodiments of the present disclosure can also be applied to scenarios in which interactive objects introduce the functions provided by the display device, which can help user groups who have weak text understanding or do not have time to read text guidance to quickly obtain the information they need.
- Fig. 2 is a flowchart of an interaction method provided by an embodiment of the present disclosure.
- the interaction method includes steps 201 to 203, wherein:
- Step 201 Receive a first trigger operation on the display device.
- Step 202 Obtain an action identifier of an interactive object used to respond to the first trigger operation.
- Step 203 Based on the action identifier, control the interactive object displayed by the display device to respond, and the response includes an action corresponding to the action identifier of the interactive object.
- the foregoing interaction method may be executed by the display device, that is, the response to the first trigger operation is completed locally; the foregoing interaction method may also be executed by the server, that is, the server completes the acquisition of the data used to respond to the first trigger operation and instructs the interactive object of the display device to respond.
- when receiving the first trigger operation on the display device in step 201, the display device may detect whether the first trigger operation exists. For example, whether there is a first trigger operation can be determined by detecting whether there is a touch operation on the display screen of the display device; or by detecting whether a set user facial expression or user body movement appears in the image collected by the display device; or by detecting a voice instruction collected by the display device; and so on.
- the specific detection method may be determined by the detection capabilities supported by the sensors configured in the display device, which is not limited in the present disclosure.
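The multi-channel detection above (touch, a set expression or movement in a captured image, a voice instruction) could be normalized into a single trigger as in the following sketch. The channel names, the set of recognized movements, and the priority order are assumptions made for illustration.

```python
# Set user facial expressions / body movements that count as a trigger
# (illustrative values only).
SET_EXPRESSIONS_AND_MOVEMENTS = {"smile", "wave", "nod"}

def detect_first_trigger(touch_event=None, observed_movement=None, voice_instruction=None):
    """Return a (channel, payload) trigger operation, or None if nothing was detected."""
    if touch_event is not None:
        return ("touch", touch_event)
    if observed_movement in SET_EXPRESSIONS_AND_MOVEMENTS:
        return ("image", observed_movement)
    if voice_instruction:
        return ("voice", voice_instruction)
    return None
```

Whichever sensors the display device is configured with determine which of these channels are actually checked.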
- the display device can report the detected first trigger operation to the server, so that the server can obtain the various types of data used to respond to the first trigger operation and then send the data required by the display device to the display device, and the display device displays the response of the interactive object.
- the first trigger operation may be used to request the display device to provide a certain function or certain data.
- the first trigger operation may be a trigger operation on the target application of the display device, for example, a trigger operation of clicking an icon of the target application to request the display device to start the target application to provide a certain service.
- the first trigger operation may also be a trigger operation on a target function option of the target application of the display device, for example, clicking the target function option in the target application to request the target application to start the function corresponding to the target function option.
- the specific manner of the trigger operation is as described above: it may be a contact operation on the display device, or a non-contact operation such as making a certain gesture or inputting voice.
- FIG. 3 shows a schematic flowchart of a response process based on a first trigger operation provided by an embodiment of the present disclosure, including the following steps:
- Step 301 Receive a first trigger operation on the display device.
- Step 302 Based on the preset mapping relationship between the trigger operation on the display device and the action identifier of the interaction object, obtain the action identifier corresponding to the first trigger operation.
- Step 303 Based on the action identifier, the interactive object displayed by the display device is controlled to respond, and the response includes an action corresponding to the action identifier of the interactive object.
- Fig. 4 shows a schematic flow chart of another response process based on a first trigger operation provided by an embodiment of the present disclosure, including the following steps:
- Step 401 Receive a first trigger operation on the display device.
- Step 402 Obtain response data used to respond to the first trigger operation.
- Step 403 Based on the preset mapping relationship between response data of trigger operations on the display device and action identifiers of the interactive object, obtain the action identifier corresponding to the response data used to respond to the first trigger operation.
- Step 404 Based on the action identifier, control the interactive object displayed by the display device to respond, and the response includes an action corresponding to the action identifier of the interactive object.
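The two flows above can be sketched side by side: Fig. 3 maps the trigger operation directly to an action identifier (steps 301 to 303), while Fig. 4 first obtains response data and then maps its key text data to an action identifier (steps 401 to 404). All mapping entries and names below are invented for illustration.

```python
TRIGGER_TO_ACTION = {"greeting_gesture": "act_greet_back"}   # Fig. 3 mapping
KEYTEXT_TO_ACTION = {"transfer": "act_point_transfer"}       # Fig. 4 mapping

def respond_direct(trigger):
    """Steps 301-303: trigger operation -> action identifier."""
    return TRIGGER_TO_ACTION.get(trigger)

def respond_via_response_data(trigger, get_response_data):
    """Steps 401-404: trigger operation -> response data -> action identifier.

    get_response_data stands in for looking up pre-stored response data or
    requesting it from a content server.
    """
    response_data = get_response_data(trigger)               # step 402
    for key_text, action_id in KEYTEXT_TO_ACTION.items():    # step 403
        if key_text in response_data:
            return action_id, response_data
    return None, response_data
```

In either flow, the resulting action identifier is then used to control the interactive object to respond (step 303 / step 404).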
- for step 301 and step 401, please refer to the relevant part above, which will not be repeated here.
- the embodiments of the present disclosure can configure a preset mapping relationship between the first trigger operation and the action identifier of the interactive object.
- for example, the first trigger operation can be a greeting gesture made by the user; a mapping relationship can then be established directly between this first trigger operation and the greeting gesture action made by the interactive object in response, with no additional response data required.
- a preset mapping relationship between the response data of the first trigger operation and the action identifier of the interaction object may also be configured.
- the user's interaction intention can be recognized through the first trigger operation, and then, according to the interaction intention, response data that meets the interaction intention can be found.
- the response data may be pre-stored, or may be obtained through a network request from other content servers, which is not limited in the present disclosure.
- the data used to respond to the first trigger operation may include response data in addition to the action identifier.
- the response data includes, but is not limited to, response data of the interactive object for the first trigger operation, and may also include response data, reported by the display device, of the target application for the first trigger operation, and so on.
- whether the preset mapping relationship is established between the first trigger operation and the action identifier of the interactive object, or between the response data of the first trigger operation and the action identifier of the interactive object, can be configured based on the requirements of the actual interactive scene; both of the above preset mapping relationships can also be configured in the same interactive scene.
- the response data may include text data
- the preset mapping relationship may include a preset mapping relationship between key text data in the text data and an action identifier.
- for example, the first trigger operation is a trigger operation by which the user requests to start the target application, especially in the interactive scenario where the user starts the target application for the first time;
- in this case, the interactive object can introduce the instructions for the target application, and the response data can be the instructions for each function option in the target application.
- the function option can be used as key text data, and a preset mapping relationship can be established between the function option and the action identifier of the interactive object.
- the action identifier of the interactive object is, for example, the identifier of the physical action of pointing to the function option, so that in the process of displaying the response content of the interactive object, the display effect of the interactive object pointing to each introduced function option can be presented in the interface of the display device.
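The function-option introduction above could be sketched as follows: each function option acts as key text data mapped to a pointing-action identifier, so the interactive object points at each option while introducing it. The option names, identifiers, and step structure are invented for illustration.

```python
# Mapping from function options (key text data) to pointing-action
# identifiers (illustrative entries only).
OPTION_TO_POINT_ACTION = {
    "Transfer": "act_point_transfer_option",
    "Deposit": "act_point_deposit_option",
}

def introduce_function_options(options):
    """Build the introduction sequence: spoken instructions plus a pointing action.

    options is a list of (option_name, instructions) pairs.
    """
    steps = []
    for name, instructions in options:
        steps.append({
            "speak": instructions,
            "action_id": OPTION_TO_POINT_ACTION.get(name),
        })
    return steps
```

Playing these steps back would present the effect of the interactive object pointing to each introduced function option on the interface.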
- the preset mapping relationship may be pre-configured in the background and used to record specific response modes of interactive objects corresponding to different trigger operations, where the specific response modes may be marked by specific action identifiers.
- the specific action identifier can identify the greeting action made by the interactive object after the target application is started, or the pointing action made by the interactive object during the introduction of the instructions for the target application.
- the preset mapping relationship can also be obtained through repeated learning based on a deep learning algorithm; in this way, after the first trigger operation or the response data of the first trigger operation is received, the deep learning model can be used to predict the action identifier of the interactive object to which it is mapped.
- in the preset mapping relationship, the first trigger operation or the response data of the first trigger operation may correspond to at least one action identifier; that is, one first trigger operation can be mapped to one action identifier, or to at least two action identifiers, so that when responding, the interactive object can make one specific action corresponding to a single action identifier, or a series of specific actions corresponding to at least two action identifiers.
- in the case of at least two action identifiers, the action identifiers may have an arrangement relationship, which is configured according to the order of execution, for example, by adding execution timestamps.
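Adding execution timestamps to order a series of mapped action identifiers could look like the following minimal sketch; the 800 ms spacing is an arbitrary illustrative value, not taken from the patent.

```python
def schedule_actions(action_ids, start_ms=0, spacing_ms=800):
    """Attach an execution timestamp to each action identifier, preserving order."""
    return [
        {"action_id": aid, "execute_at_ms": start_ms + i * spacing_ms}
        for i, aid in enumerate(action_ids)
    ]
```

A playback component could then execute each action when its timestamp is reached, producing the series of specific actions described above.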
- the action identifier of the interactive object is used to identify a specific action made by the interactive object, so that the interactive object can make an action corresponding to the action identifier in the process of responding by the interactive object.
- the action identifier includes the physical action identifier of the interactive object, and/or the display position identifier of the interactive object when the interactive object performs the action. The physical action identifier can be used to identify the specific physical action made by the interactive object during the response process, and the display position identifier can be used to identify the position at which the interactive object is displayed in the display area of the display device during the response process. At least one of these two identifiers can improve the display effect of the interactive object during the display process.
- the body action identifier is used to identify the specific body action of the interactive object.
- the specific body action may be the movement of the head of the interactive object, or the movement of the body torso of the interactive object, and the movement of the head may also include facial expression actions.
- the display position identifier is used to identify the specific display position of the interactive object when it makes an action.
- the specific display position is a position on the interface of the display device; a suitable display position can make it easy for the user to view the response content and avoid possible occlusion, achieving a better interactive effect.
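Choosing such a display position could be sketched on the nine-square grid that Fig. 8 divides the interface into: pick a cell for the interactive object that is not occupied by content the user is viewing, so the response is not occluded. The cell coordinates and preference order below are assumptions for illustration.

```python
# Nine-square grid of the display interface: (row, col) cells of a 3x3 grid.
GRID = [(row, col) for row in range(3) for col in range(3)]

def choose_display_position(occupied, preferred=(1, 2)):
    """Return the preferred cell if free, otherwise the first unoccupied cell.

    occupied is the set of cells covered by content that must stay visible.
    """
    if preferred not in occupied:
        return preferred
    for cell in GRID:
        if cell not in occupied:
            return cell
    return preferred  # everything occupied: fall back to the preferred cell
```

The chosen cell would then serve as the target display position identified by the display position identifier.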
- in the case where the action identifier includes the body action identifier of the interactive object, the action corresponding to the action identifier includes, but is not limited to, a body action of the interactive object pointing to the target display area of the display device.
- the directional body action of the interactive object can be applied to a scenario in which related functions on the interface of the display device are described. For example, it can be a scenario in which certain functions on the target application are introduced.
- the first trigger operation received may be a trigger operation on the target application of the display device, or a trigger operation on the target function option of the target application of the display device.
- in this way, the user can quickly learn what the interactive object is currently responding to, making the interaction more realistic and smooth. The display position identifier of the interactive object can additionally identify the position at which the pointing or other actions are performed, which makes it easier for the user to view the response content, avoids possible problems such as occlusion, and achieves a better interactive effect.
- Example 1 In the case where the action identification includes the physical action identification of the interactive object, the action corresponding to the action identification includes the physical action of the interactive object pointing to the target display area of the display device.
- One possible interaction process is:
- the response data corresponding to the first trigger operation can be obtained.
- the response data includes text data of the function introduction on the target application. Further, based on the preset mapping relationship between the response data and the action identifiers of the interactive object, the action identifier of the interactive object corresponding to the key text data in the text data can be obtained.
- the key text data is, for example, text data describing a specific function on the target application, for example, text data describing the first function on the target application.
- the function option of the first function can be located in a preset display area on the display interface of the target application.
- the action identifier of the interactive object may be configured as an action identifier pointing to the preset display area where the first function is located.
- the display device can be controlled to display the content of the response of the interactive object, and the content of the response can include an action pointing to the preset display area where the first function is located.
- the content of the response may also include other forms of content, such as replying to a voice or displaying some prompt content, which is not limited in the present disclosure.
- FIG. 5 shows an example of a display interface displaying the response content of the interactive object.
- the response content of the interactive object includes an action pointing to the "microphone" function, and the corresponding microphone operation instructions are also displayed.
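The mapping in Example 1 can be sketched as follows (the key texts and action identifier names are hypothetical, invented for illustration): scan the response text for key text data and collect the mapped action identifiers.

```python
# Hypothetical preset mapping between key text data and action identifiers.
KEY_TEXT_TO_ACTION_ID = {
    "microphone": "point_to_microphone_area",
    "transfer": "point_to_transfer_area",
}

def action_ids_for_text(text_data):
    """Return the action identifiers whose key text data appears in the response text."""
    return [aid for key, aid in KEY_TEXT_TO_ACTION_ID.items() if key in text_data]

ids = action_ids_for_text("Tap the microphone icon to start voice input.")
print(ids)  # → ['point_to_microphone_area']
```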
- Example 2 In the case where the action identifier includes the display position identifier of the interactive object when the action is taken, the action corresponding to the action identifier includes the action made by the interactive object on the target display position.
- One possible interaction process is:
- the response data corresponding to the first trigger operation can be obtained.
- the response data includes function-related data provided by the target function option.
- based on the preset mapping relationship between response data and action identifiers, the action identifier of the interactive object corresponding to the first trigger operation can be obtained.
- the interactive object can be made to perform an action introducing the function provided by the target function option on the target display position. In this way, based on the action identification of the interactive object, the display device can be controlled to display the content of the response of the interactive object, and the content of the response may include the action of the interactive object on the target display position.
- FIG. 6 shows an example of a display interface displaying the response content of the interactive object.
- the target function option is, for example, the transfer option in the target application, which provides the transfer function.
- after the first trigger operation on the transfer option, the target application jumps to the transfer display interface, after which the response content of the interactive object can be displayed.
- a display position identifier can be configured in the action identifier of the interactive object, so that the interactive object introduces the transfer function at the target display position at the lower left of the transfer operation area.
- Example 3 In a case where the action identification includes a physical action identification and a display location identification, the action corresponding to the action identification includes a physical action of the interactive object pointing to the target display area at the target display location.
- One possible interaction process is:
- the configuration of the preset mapping relationship between the display location identifier of the interaction object and the response data can be added.
- the display device can be controlled to display the content that the interactive object responds based on the physical action identification and the display location identification of the interactive object.
- the response content may include the interactive object performing, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier. Continuing with FIGS. 5 and 7, which show display interfaces displaying the response content of the interactive object, the response content includes the interactive object, at the target display position "B" corresponding to the display position identifier, making an action pointing to the "microphone" function, with the corresponding microphone operation instructions also displayed.
- the target display area pointed to by the interactive object can be the preset display area where the triggered target function option is located, the display area associated with the triggered target application, or the display area associated with the triggered target function option.
- the target display position can also be determined based on the triggered target function option and the triggered target application.
- the body action identifier and the display position identifier in the preset mapping relationship can be configured based on specific interaction requirements, which is not limited in the present disclosure.
- the display interface of the target application can be divided into a nine-square grid comprising the nine display areas "upper left", "up", "upper right", "left", "middle", "right", "lower left", "down", and "lower right".
- Each display area may include different functional options of the target application.
- the body action identifiers of the interactive object can include six body actions, as shown in FIG. 9 and FIG. 10: "upper left", "left", "lower left", "upper right", "right", and "lower right". The display position identifiers of the interactive object may include three display positions: "A", "B", and "C".
- suppose the target function option of the target application triggered by the first trigger operation is in one of the display areas "upper left", "left", "lower left", "upper right", "right", or "lower right".
- then the display position identifier that has a mapping relationship with the first trigger operation (or with its response data) can be "B", and the body action identifier that has a mapping relationship with the first trigger operation (or with its response data) can be one of "upper left", "left", "lower left", "upper right", "right", and "lower right"; which body action identifier is selected is determined by the display area of the triggered target function option. For example, if the display area of the triggered target function option is "upper left", the body action "upper right" can be selected.
- if the target function option of the target application triggered by the first trigger operation is in one of the three display areas "up", "middle", and "down", then the display position identifier that has a mapping relationship with the first trigger operation (or with its response data) can be "A" or "C", and the body action identifier that has a mapping relationship with the first trigger operation (or with its response data) can be one of "upper left", "left", "lower left", "upper right", "right", and "lower right"; which body action identifier is selected is determined by the display area of the triggered target function option. For example, if the display area of the triggered target function option is "up" and the display position identifier is "A", the body action "upper left" can be selected; or, if the display area is "up" and the display position identifier is "C", the body action "upper right" can be selected.
- the key text may be, for example, text describing the display area where the target function option is located; text with clear orientation information, such as "above", "upper right", and "below", can be used directly as the key text. The name of the target function option can also be used as the key text, in which case the display area where the target function option is located has been pre-recorded for that name.
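One way to read the nine-grid example above is sketched below. The layout assumptions — the avatar at "B" mirroring side areas horizontally, and standing at "A" or "C" for middle-column options — are an illustrative interpretation of the passages above, not a mapping mandated by the disclosure:

```python
SIDE_AREAS = {"upper left", "left", "lower left", "upper right", "right", "lower right"}
MIDDLE_AREAS = {"up", "middle", "down"}

# From display position "B" the avatar points back across the interface,
# so side areas are mirrored horizontally.
MIRROR = {
    "upper left": "upper right", "left": "right", "lower left": "lower right",
    "upper right": "upper left", "right": "left", "lower right": "lower left",
}

def select_pose(option_area, avatar_position="A"):
    """Map the triggered option's display area to (display position id, body action id)."""
    if option_area in SIDE_AREAS:
        return "B", MIRROR[option_area]
    if option_area in MIDDLE_AREAS:
        vertical = {"up": "upper ", "middle": "", "down": "lower "}[option_area]
        side = "left" if avatar_position == "A" else "right"
        return avatar_position, vertical + side
    raise ValueError(f"unknown display area: {option_area!r}")

print(select_pose("upper left"))  # → ('B', 'upper right')
print(select_pose("up", "A"))    # → ('A', 'upper left')
print(select_pose("up", "C"))    # → ('C', 'upper right')
```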
- for the specific processing flow of controlling, based on the action identifier, the interactive object displayed by the display device to respond, shown as step 203, step 303, and step 404, refer to FIG. 11; it includes the following steps:
- Step 4041 Acquire driving data corresponding to the action identifier.
- the driving data is used to adjust the display state of the interactive object.
- the interactive object serves as an avatar: a 3D or 2D model of the interactive object is maintained in the background, and the driving data adjusts the parameters of the relevant parts of the model, which in turn changes the display state of the interactive object.
- the relevant parts include, but are not limited to, the head, various joint parts of the limbs, and facial expressions.
- the action identification can reflect the display state of the interactive object to be presented, so the driving data corresponding to the action identification can be obtained.
- the driving data can be stored in a local database or a cloud database. When the embodiment of the present disclosure is applied to a server, the driving data can be stored on the server's own storage unit or on other service-related servers, which is not limited in the present disclosure.
- Step 4042 Use the driving data to render the display screen of the interactive object.
- a built-in rendering tool can be used to render the display screen of the interactive object.
- the specific rendering tool used is not limited in the present disclosure.
- the display screen may include either of the following screen contents: screen content of the interactive object performing the body action corresponding to the body action identifier; screen content of the interactive object performing, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier.
- for examples of the screen content, refer to FIGS. 5 to 7.
- Step 4043 Control the display device to display the display screen of the interactive object.
- the display screen can be directly presented on the display device after the local rendering is successful; when the embodiment of the present disclosure is applied to a server, the server can send the successfully rendered display screen to The display device is then presented by the display device.
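Steps 4041 to 4043 can be sketched as a small pipeline. The in-memory store and the string "frame" are stand-ins: a real implementation would query a local or cloud database for the driving data and use an actual rendering tool.

```python
# Hypothetical driving-data store keyed by action identifier (source for step 4041).
DRIVING_DATA = {
    "point_to_microphone_area": {"position": "B", "joints": {"right_arm": (30, 80)}},
}

def acquire_driving_data(action_id):
    """Step 4041: acquire the driving data corresponding to the action identifier."""
    return DRIVING_DATA[action_id]

def render_display_screen(driving):
    """Step 4042: stand-in renderer; returns a description of the display screen."""
    return f"avatar at {driving['position']} moving joints {sorted(driving['joints'])}"

def display(screen):
    """Step 4043: stand-in for handing the rendered screen to the display device."""
    return f"displaying: {screen}"

print(display(render_display_screen(acquire_driving_data("point_to_microphone_area"))))
```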
- the response data includes, but is not limited to, the response data of the interactive object for the first trigger operation, the response data of the target application on the display device for the first trigger operation, and the like.
- the response data corresponding to the first trigger operation can also be obtained, and the display device can be controlled to play the voice data in the response data and/or to display a prompt box of the response data on the interface of the display device; this is not limited in the present disclosure.
- the foregoing embodiments are not limited to performing a response related to the action identifier of the interactive object, and the response can also be achieved by playing voice or displaying a prompt box, etc., so as to diversify the presentation mode of response data and improve the interactive experience.
- when the display device displays the display screen of the interactive object, the layer of the response content of the target application can be above the layer of the response content of the interactive object. In this way the two respond separately, which avoids possible conflicts between the response process of the interactive object and the operation of the target application.
- specifically, the response content of the interactive object can be set on the background layer of the target application: the display device is controlled to display the display screen of the interactive object on the background layer of the display interface of the target application, with the display interface located above the background layer. The display effect can be seen in FIG. 12: the display screen of the interactive object is in the background layer, and the response content of the target application is displayed above the background layer.
- the embodiment of the present disclosure can identify the driving mode of the driving data and, in response to the driving mode, obtain the control parameters of the interactive object according to the driving data and control the posture of the interactive object according to the control parameters.
- a voice data sequence corresponding to the driving data is obtained.
- the voice data sequence includes a plurality of voice data units. If it is detected that a voice data unit includes target data, the driving mode of the driving data is determined to be the first driving mode, where the target data corresponds to preset control parameters of the interactive object. Further, in response to the first driving mode, the preset control parameters corresponding to the target data can be used as the control parameters of the interactive object.
- the target data includes a keyword or key phrase that corresponds to preset control parameters of a set action of the interactive object.
- if no target data is detected in the voice data units, the driving mode of the driving data is determined to be the second driving mode. In response to the second driving mode, characteristic information of at least one voice data unit in the voice data sequence may be obtained, and the control parameters of the interactive object corresponding to the characteristic information may then be obtained.
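The two driving modes can be sketched as follows (the keyword set, parameter values, and the trivial feature computation are invented stand-ins): if any voice data unit contains target data, use its preset control parameters; otherwise fall back to deriving control parameters from feature information.

```python
# Hypothetical target data mapped to preset control parameters of set actions.
PRESET_CONTROL_PARAMS = {
    "wave hello": {"right_arm": "wave"},
}

def drive(voice_data_units):
    """Return (driving mode, control parameters) for a voice data sequence."""
    for unit in voice_data_units:
        for target, params in PRESET_CONTROL_PARAMS.items():
            if target in unit:
                return "first", params  # first driving mode: use the preset parameters
    # Second driving mode: derive control parameters from feature information
    # (here a trivial stand-in based on unit lengths, not a real feature extractor).
    features = [len(unit) for unit in voice_data_units]
    return "second", {"mouth_open": [f % 10 / 10 for f in features]}

print(drive(["hi there", "wave hello everyone"]))  # → ('first', {'right_arm': 'wave'})
mode, _ = drive(["no keywords here"])
print(mode)  # → second
```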
- the aforementioned voice data sequence may include a phoneme sequence. The phoneme sequence may be feature-encoded to obtain a first coding sequence corresponding to the phoneme sequence; according to the first coding sequence, the feature code corresponding to at least one phoneme is obtained; and according to the feature code, the feature information of the at least one phoneme is obtained.
- the aforementioned voice data sequence may also include a voice frame sequence.
- the first acoustic feature sequence corresponding to the voice frame sequence may also be acquired; the first acoustic feature sequence includes an acoustic feature vector corresponding to each voice frame in the voice frame sequence. According to the first acoustic feature sequence, the acoustic feature vector corresponding to at least one voice frame is acquired, and according to the acoustic feature vector, feature information corresponding to the at least one voice frame is obtained.
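For the phoneme-sequence path, a minimal sketch of feature coding follows. One-hot coding and the neighbour-window notion of "feature information" are simple illustrative choices; the disclosure does not fix the encoding.

```python
def feature_encode(phoneme_sequence):
    """One-hot feature coding: the first coding sequence for a phoneme sequence."""
    vocab = sorted(set(phoneme_sequence))
    index = {p: i for i, p in enumerate(vocab)}
    return [[1 if i == index[p] else 0 for i in range(len(vocab))]
            for p in phoneme_sequence]

def feature_info(coding_sequence, pos, window=1):
    """Feature information of one phoneme: its code plus its neighbours' codes."""
    lo, hi = max(0, pos - window), min(len(coding_sequence), pos + window + 1)
    return [v for code in coding_sequence[lo:hi] for v in code]

codes = feature_encode(["h", "e", "l", "l", "o"])
print(codes[0])                 # → [0, 1, 0, 0]  (code for "h"; vocab is e,h,l,o)
print(feature_info(codes, 2))   # codes of "e", "l", "l" concatenated
```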
- control parameter of the interactive object includes a control vector of at least one local area of the interactive object.
- the above-mentioned characteristic information may be input to the recurrent neural network to obtain the control parameters of the interactive object corresponding to the characteristic information.
- the foregoing acquiring of control parameters of the interactive object according to the driving data may include: acquiring a control vector of at least one local area of the interactive object according to the driving data. Controlling the posture of the interactive object according to the control parameters may include: controlling the facial movements and/or body movements of the interactive object according to the acquired control vector of the at least one local area.
- an embodiment of the present disclosure provides an interactive system.
- the interactive system includes a display device 1301 and a server 1302.
- the display device 1301 is configured to obtain the first trigger operation on the display device 1301 and send the first trigger operation to the server 1302, and control the interactive objects displayed by the display device to respond based on the instructions of the server 1302;
- the server 1302 is configured to receive the first trigger operation and obtain the action identifier of the interactive object used to respond to the first trigger operation; based on the action identifier, instruct the display device 1301 to control the interactive object to respond; the response includes the action identifier corresponding to the interactive object Actions.
- the display device 1301 can detect the first trigger operation and then request response data from the server 1302. Accordingly, the server can obtain, from a database in which the preset mapping relationships are pre-stored, the action identifier of the interactive object corresponding to the first trigger operation, and then obtain the corresponding driving data of the interactive object based on the action identifier.
- in one embodiment, the server 1302 can render the driving data into an animation of the interactive object through a rendering tool and deliver the rendered result directly to the display device 1301, so that the display device 1301 displays it, presenting an anthropomorphic effect in which the interactive object responds to the first trigger operation.
- in another embodiment, the server 1302 may instead send the driving data to the display device 1301; the display device 1301 renders the driving data through its built-in rendering tool and then displays the rendered result, presenting an anthropomorphic effect in which the interactive object responds to the first trigger operation.
- in this way, the server 1302 in the interactive system provides the main computing capability, so that the display device 1301 does not need much local processing capacity, which reduces the processing pressure on the display device 1301.
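The division of labour between display device and server can be sketched as below. The class names, mapping, and message strings are illustrative; this shows the server-side rendering variant, in which the device acts as a thin client.

```python
class Server:
    """Holds the preset mapping and provides the main computing capability."""
    MAPPING = {"tap_microphone_option": "point_to_microphone_area"}  # hypothetical

    def handle(self, trigger_op):
        action_id = self.MAPPING[trigger_op]
        # Server-side rendering variant: return a finished animation description;
        # in the other variant the raw driving data would be sent instead.
        return f"animation for {action_id}"

class DisplayDevice:
    """Thin client: detects the trigger operation and shows what the server returns."""
    def __init__(self, server):
        self.server = server

    def on_trigger(self, trigger_op):
        return f"showing {self.server.handle(trigger_op)}"

device = DisplayDevice(Server())
print(device.on_trigger("tap_microphone_option"))
# → showing animation for point_to_microphone_area
```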
- through the above interactive system, the interactive object can respond to the user's trigger operation through anthropomorphic actions, making the interaction process smoother and effectively improving the interactive experience.
- an embodiment of the present disclosure provides an interaction device 1400.
- the interaction device 1400 includes a receiving module 1401, an obtaining module 1402, and a control module 1403.
- the receiving module 1401 is configured to receive the first trigger operation on the display device
- the obtaining module 1402 is configured to obtain the action identifier of the interactive object used to respond to the first trigger operation
- the control module 1403 is configured to control the interactive object displayed by the display device to respond based on the action identifier, and the response includes an action corresponding to the action identifier of the interactive object.
- when acquiring the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module 1402 is configured to:
- the action identifier corresponding to the first trigger operation is obtained.
- when acquiring the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module 1402 is configured to:
- the action identifier corresponding to the response data used to respond to the first trigger operation is acquired.
- the response data includes text data
- the preset mapping relationship includes a preset mapping relationship between the key text data in the text data and the action identifier.
- the action identifier includes the body action identifier of the interactive object, and/or the display position identifier of the interactive object when the interactive object performs the action.
- when receiving the first trigger operation on the display device, the receiving module 1401 is configured to: receive a first trigger operation on a target application of the display device; or receive a first trigger operation on a target function option of the target application of the display device, the target function option being located in a preset display area of the display interface of the target application.
- in the case where the action identifier includes the body action identifier of the interactive object, the action corresponding to the action identifier includes a body action of the interactive object pointing to the target display area of the display device;
- in the case where the action identifier includes the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier includes an action performed by the interactive object at the target display position;
- in the case where the action identifier includes both the body action identifier and the display position identifier, the action corresponding to the action identifier includes a body action of the interactive object, performed at the target display position, pointing to the target display area; the target display area is a preset display area or a display area associated with the preset display area.
- when controlling, based on the action identifier, the interactive object displayed by the display device to respond, the control module 1403 is configured to: acquire driving data corresponding to the action identifier; use the driving data to render the display screen of the interactive object, the display screen including either of the following screen contents: screen content of the interactive object performing the body action corresponding to the body action identifier, or screen content of the interactive object performing, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier; and control the display device to display the display screen of the interactive object.
- when controlling the display device to display the display screen of the interactive object, the control module 1403 is configured to:
- control the display device to display the display screen of the interactive object on the background layer of the display interface of the target application, with the display interface located above the background layer.
- the control module 1403 is also configured to: acquire the response data corresponding to the first trigger operation; and control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
- FIG. 15 is a schematic structural diagram of an electronic device 1500 provided by an embodiment of the disclosure.
- the electronic device 1500 includes: a processor 1501, a memory 1502, and a bus 1503.
- the memory 1502 stores machine-readable instructions executable by the processor 1501 (for example, execution instructions corresponding to the receiving module 1401, the obtaining module 1402, and the control module 1403 in the apparatus in FIG. 14).
- when the electronic device 1500 runs, the processor 1501 communicates with the memory 1502 through the bus 1503.
- when executed, the machine-readable instructions cause the processor 1501 to perform the following processing: receiving the first trigger operation on the display device; acquiring the action identifier of the interactive object used to respond to the first trigger operation; and controlling, based on the action identifier, the interactive object displayed by the display device to respond, the response including the action corresponding to the action identifier of the interactive object.
- the embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor executes the interaction method described in the foregoing method embodiment.
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- the computer program product of the interaction method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the interaction method described in the above method embodiments. For details, see the above method embodiments; they will not be repeated here.
- the embodiments of the present disclosure also provide a computer program, which when executed by a processor causes the processor to implement any one of the methods in the foregoing embodiments.
- the computer program product can be specifically implemented by hardware, software, or a combination thereof.
- the computer program product is specifically embodied as a computer storage medium.
- the computer program product is specifically embodied as a software product, such as a software development kit (Software Development Kit, SDK) and so on.
- the interaction method, interaction system, apparatus, device, and computer program proposed in the embodiments of the present disclosure provide an interaction scheme in which an interactive object responds to a user's trigger operation: using the action identifier corresponding to the first trigger operation, the interactive object is controlled to respond to the user through anthropomorphic actions, the response including the action corresponding to the action identifier. This can make the interaction process more realistic and smooth and can effectively enhance the interaction experience.
- the interaction solution provided by the embodiments of the present disclosure can also be applied to scenarios in which the interactive object introduces the functions provided by the display device, which helps users who have difficulty understanding text, or who do not have time to read textual guidance, quickly obtain the required information.
- the interaction solution provided by the embodiments of the present disclosure can also be applied to other application scenarios with interaction requirements, which is not specifically limited in the present disclosure.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present disclosure.
- the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer-readable storage medium executable by a processor.
- the technical solution of the embodiments of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a computer software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
Claims (20)
- An interaction method, comprising: receiving a first trigger operation on a display device; acquiring an action identifier of an interactive object for responding to the first trigger operation; and controlling, based on the action identifier, the interactive object displayed by the display device to respond, the response including an action corresponding to the action identifier of the interactive object.
- The method according to claim 1, wherein acquiring the action identifier of the interactive object for responding to the first trigger operation comprises: acquiring the action identifier corresponding to the first trigger operation based on a preset mapping relationship between trigger operations on the display device and action identifiers of the interactive object.
- The method according to claim 1, wherein acquiring the action identifier of the interactive object for responding to the first trigger operation comprises: acquiring response data for responding to the first trigger operation; and acquiring the action identifier corresponding to the response data based on a preset mapping relationship between response data of trigger operations on the display device and action identifiers of the interactive object.
- The method according to claim 3, wherein the response data comprises text data, and the preset mapping relationship comprises a preset mapping relationship between key text data in the text data and action identifiers.
- The method according to any one of claims 1 to 4, wherein the action identifier comprises a body action identifier of the interactive object and/or a display position identifier of the interactive object when performing an action.
- The method according to any one of claims 1 to 5, wherein receiving the first trigger operation on the display device comprises: receiving a first trigger operation on a target application of the display device; or receiving a first trigger operation on a target function option of the target application of the display device, the target function option being located in a preset display area of a display interface of the target application.
- The method according to claim 5, wherein: in a case where the action identifier comprises the body action identifier of the interactive object, the action corresponding to the action identifier comprises a body action of the interactive object pointing to a target display area of the display device; in a case where the action identifier comprises the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier comprises an action performed by the interactive object at a target display position; and in a case where the action identifier comprises both the body action identifier and the display position identifier, the action corresponding to the action identifier comprises a body action of the interactive object, performed at the target display position, pointing to the target display area.
- The method according to any one of claims 1 to 7, wherein controlling, based on the action identifier, the interactive object displayed by the display device to respond comprises: acquiring driving data corresponding to the action identifier; rendering, using the driving data, a display screen of the interactive object, the display screen including either of the following screen contents: screen content of the interactive object performing the body action corresponding to the body action identifier, or screen content of the interactive object performing, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier; and controlling the display device to display the display screen of the interactive object.
- The method according to claim 8, wherein controlling the display device to display the display screen of the interactive object comprises: controlling the display device to display the display screen of the interactive object on a background layer of a display interface of a target application, the display interface being located above the background layer.
- The method according to any one of claims 1 to 9, further comprising: acquiring response data corresponding to the first trigger operation; and controlling the display device to play voice data in the response data, and/or displaying a prompt box of the response data on an interface of the display device.
- An interaction system, comprising a display device and a server, wherein the display device is configured to acquire a first trigger operation on the display device, send the first trigger operation to the server, and control, based on instructions from the server, the interactive object displayed by the display device to respond; and the server is configured to receive the first trigger operation, acquire an action identifier of the interactive object for responding to the first trigger operation, and instruct, based on the action identifier, the display device to control the interactive object to respond, the response including an action corresponding to the action identifier of the interactive object.
- An interaction apparatus, comprising: a receiving module configured to receive a first trigger operation on a display device; an acquiring module configured to acquire an action identifier of an interactive object for responding to the first trigger operation; and a control module configured to control, based on the action identifier, the interactive object displayed by the display device to respond, the response including an action corresponding to the action identifier of the interactive object.
- The apparatus according to claim 12, wherein, when acquiring the action identifier of the interactive object for responding to the first trigger operation, the acquiring module is configured to acquire the action identifier corresponding to the first trigger operation based on a preset mapping relationship between trigger operations on the display device and action identifiers of the interactive object.
- The apparatus according to claim 12, wherein, when acquiring the action identifier of the interactive object for responding to the first trigger operation, the acquiring module is configured to: acquire response data for responding to the first trigger operation; and acquire the action identifier corresponding to the response data based on a preset mapping relationship between response data of trigger operations on the display device and action identifiers of the interactive object.
- The apparatus according to claim 14, wherein the response data comprises text data, and the preset mapping relationship comprises a preset mapping relationship between key text data in the text data and action identifiers.
- The apparatus according to any one of claims 12 to 15, wherein the action identifier comprises a body action identifier of the interactive object and/or a display position identifier of the interactive object when performing an action.
- The apparatus according to any one of claims 12 to 16, wherein, when receiving the first trigger operation on the display device, the receiving module is configured to: receive a first trigger operation on a target application of the display device; or receive a first trigger operation on a target function option of the target application of the display device, the target function option being located in a preset display area of a display interface of the target application.
- The apparatus according to claim 16, wherein: in a case where the action identifier comprises the body action identifier of the interactive object, the action corresponding to the action identifier comprises a body action of the interactive object pointing to a target display area of the display device; in a case where the action identifier comprises the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier comprises an action performed by the interactive object at a target display position; and in a case where the action identifier comprises both the body action identifier and the display position identifier, the action corresponding to the action identifier comprises a body action of the interactive object, performed at the target display position, pointing to the target display area.
- An electronic device, comprising a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, cause the processor to perform the interaction method according to any one of claims 1 to 10.
- A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the interaction method according to any one of claims 1 to 10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217026797A KR20210129067A (ko) | 2020-04-13 | 2020-11-19 | Interaction method, apparatus, interaction apparatus, electronic device, and storage medium |
SG11202109187WA SG11202109187WA (en) | 2020-04-13 | 2020-11-19 | Interaction methods and apparatuses, interaction systems, electronic devices and storage media |
JP2021556975A JP2022532696A (ja) | 2020-04-13 | 2020-11-19 | Interaction method, apparatus, system, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010285478.9 | 2020-04-13 | ||
CN202010285478.9A CN111488090A (zh) | 2020-04-13 | 2020-04-13 | Interaction method and apparatus, interaction system, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021208432A1 true WO2021208432A1 (zh) | 2021-10-21 |
Family
ID=71791805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/130092 WO2021208432A1 (zh) | Interaction method and apparatus, interaction system, electronic device, and storage medium | 2020-04-13 | 2020-11-19 |
Country Status (6)
Country | Link |
---|---|
JP (1) | JP2022532696A (zh) |
KR (1) | KR20210129067A (zh) |
CN (1) | CN111488090A (zh) |
SG (1) | SG11202109187WA (zh) |
TW (1) | TW202138971A (zh) |
WO (1) | WO2021208432A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111488090A (zh) * | 2020-04-13 | 2020-08-04 | Beijing SenseTime Technology Development Co., Ltd. | Interaction method and apparatus, interaction system, electronic device, and storage medium |
CN113138765A (zh) * | 2021-05-19 | 2021-07-20 | Beijing SenseTime Technology Development Co., Ltd. | Interaction method, apparatus, device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103116463A (zh) * | 2013-01-31 | 2013-05-22 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Interface control method for personal digital assistant application and mobile terminal |
CN107085495A (zh) * | 2017-05-23 | 2017-08-22 | Xiamen Huanshi Network Technology Co., Ltd. | Information display method, electronic device, and storage medium |
CN107294838A (zh) * | 2017-05-24 | 2017-10-24 | Tencent Technology (Shenzhen) Co., Ltd. | Animation generation method, apparatus, system, and terminal for social application |
CN108491147A (zh) * | 2018-04-16 | 2018-09-04 | Hisense Mobile Communications Technology Co., Ltd. (Qingdao) | Human-computer interaction method based on a virtual character and mobile terminal |
CN111488090A (zh) * | 2020-04-13 | 2020-08-04 | Beijing SenseTime Technology Development Co., Ltd. | Interaction method and apparatus, interaction system, electronic device, and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006235671A (ja) * | 2005-02-22 | 2006-09-07 | Norinaga Tsukiji | Conversation device and computer-readable recording medium |
JP2009163631A (ja) * | 2008-01-09 | 2009-07-23 | Nippon Telegr & Teleph Corp <Ntt> | Virtual agent control device and program therefor |
CN105718133A (zh) * | 2014-12-05 | 2016-06-29 | Zhuhai Kingsoft Office Software Co., Ltd. | Method and apparatus for guiding user operations |
JP2017143992A (ja) * | 2016-02-16 | 2017-08-24 | Topcon Corporation | Ophthalmic examination system and ophthalmic examination apparatus |
US10685656B2 (en) * | 2016-08-31 | 2020-06-16 | Bose Corporation | Accessing multiple virtual personal assistants (VPA) from a single device |
CN107894833B (zh) * | 2017-10-26 | 2021-06-15 | Beijing Guangnian Wuxian Technology Co., Ltd. | Multimodal interaction processing method and system based on a virtual human |
CN110874137B (zh) * | 2018-08-31 | 2023-06-13 | Alibaba Group Holding Ltd. | Interaction method and apparatus |
CN110125932B (zh) * | 2019-05-06 | 2024-03-19 | CloudMinds (Beijing) Technologies Co., Ltd. | Dialogue interaction method for a robot, robot, and readable storage medium |
CN110989900B (zh) * | 2019-11-28 | 2021-11-05 | Beijing SenseTime Technology Development Co., Ltd. | Driving method and apparatus for an interactive object, device, and storage medium |
CN110968194A (zh) * | 2019-11-28 | 2020-04-07 | Beijing SenseTime Technology Development Co., Ltd. | Driving method and apparatus for an interactive object, device, and storage medium |
CN110868635B (zh) * | 2019-12-04 | 2021-01-12 | Shenzhen Zhuiyi Technology Co., Ltd. | Video processing method and apparatus, electronic device, and storage medium |
2020
- 2020-04-13 CN CN202010285478.9A patent/CN111488090A/zh active Pending
- 2020-11-19 JP JP2021556975A patent/JP2022532696A/ja active Pending
- 2020-11-19 SG SG11202109187WA patent/SG11202109187WA/en unknown
- 2020-11-19 KR KR1020217026797A patent/KR20210129067A/ko not_active Application Discontinuation
- 2020-11-19 WO PCT/CN2020/130092 patent/WO2021208432A1/zh active Application Filing
- 2020-12-21 TW TW109145339A patent/TW202138971A/zh unknown
Also Published As
Publication number | Publication date |
---|---|
JP2022532696A (ja) | 2022-07-19 |
SG11202109187WA (en) | 2021-11-29 |
KR20210129067A (ko) | 2021-10-27 |
TW202138971A (zh) | 2021-10-16 |
CN111488090A (zh) | 2020-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7411133B2 (ja) | Keyboards for virtual reality display systems, augmented reality display systems, and mixed reality display systems | |
US11043031B2 (en) | Content display property management | |
JP6013583B2 (ja) | Scheme for emphasizing active interface elements | |
US11615592B2 (en) | Side-by-side character animation from realtime 3D body motion capture | |
KR20210046591A (ko) | Augmented reality data presentation method and apparatus, electronic device, and storage medium | |
CN108273265A (zh) | Display method and apparatus for a virtual object | |
CN111158469A (zh) | Viewing angle switching method and apparatus, terminal device, and storage medium | |
US10955929B2 (en) | Artificial reality system having a digit-mapped self-haptic input method | |
KR20230022269A (ko) | Augmented reality data presentation method and apparatus, electronic device, and storage medium | |
WO2021208432A1 (zh) | Interaction method and apparatus, interaction system, electronic device, and storage medium | |
CN111771180A (zh) | Hybrid placement of objects in an augmented reality environment | |
US11367416B1 (en) | Presenting computer-generated content associated with reading content based on user interactions | |
CN111971714A (zh) | Loading indicator in an augmented reality environment | |
WO2019166005A1 (zh) | Smart terminal, sensing control method therefor, and device having storage function | |
CN103752010B (zh) | Augmented reality overlay for control devices | |
WO2020201998A1 (en) | Transitioning between an augmented reality scene and a virtual reality representation | |
CN110609615A (zh) | System and method for integrating haptic overlays in augmented reality | |
KR102587645B1 (ko) | System and method for precise positioning using touchscreen gestures | |
CN112717409B (zh) | Virtual vehicle control method and apparatus, computer device, and storage medium | |
US9041669B1 (en) | Input/output device | |
US11948237B2 (en) | System and method for mimicking user handwriting or other user input using an avatar | |
TWI799195B (zh) | Method and system for realizing a third-person perspective with a virtual object | |
US11934627B1 (en) | 3D user interface with sliding cylindrical volumes | |
US20230410441A1 (en) | Generating user interfaces displaying augmented reality graphics | |
CN114904279A (zh) | Data preprocessing method and apparatus, medium, and device |
Legal Events
Code | Title | Description
---|---|---
ENP | Entry into the national phase | Ref document number: 2021556975; Country of ref document: JP; Kind code of ref document: A
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20930989; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 20930989; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 521430712; Country of ref document: SA