WO2021208432A1 - Interaction method, apparatus, interaction system, electronic device and storage medium - Google Patents

Interaction method, apparatus, interaction system, electronic device and storage medium

Info

Publication number
WO2021208432A1
Authority
WO
WIPO (PCT)
Prior art keywords
action
interactive object
trigger operation
display device
display
Prior art date
Application number
PCT/CN2020/130092
Other languages
English (en)
French (fr)
Inventor
张子隆
许亲亲
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to KR1020217026797A (publication KR20210129067A)
Priority to SG11202109187WA
Priority to JP2021556975A (publication JP2022532696A)
Publication of WO2021208432A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to an interaction method, device, interaction system, electronic equipment, and storage medium.
  • at present, human-computer interaction is mostly performed as follows: the user provides input via keys, touch, or voice, and the device responds by presenting images, text, or virtual characters on a display screen. Most virtual characters are currently built on top of voice assistants, and the interaction between users and virtual characters remains superficial.
  • the embodiments of the present disclosure provide at least one interaction method, device, interaction system, electronic equipment, and storage medium.
  • embodiments of the present disclosure provide an interaction method, the method including: receiving a first trigger operation on a display device; obtaining an action identifier of an interactive object used to respond to the first trigger operation; and, based on the action identifier, controlling the interactive object displayed by the display device to respond, where the response includes an action corresponding to the action identifier of the interactive object.
  • the embodiment of the present disclosure proposes a solution in which an interactive object responds to a user's trigger operation.
  • the action identifier corresponding to the first trigger operation can be used to control the interactive object to respond to the user through an anthropomorphic action; because the response includes the action corresponding to the action identifier, the interaction process becomes more realistic and smooth, which effectively improves the interaction experience.
  • the embodiments of the present disclosure can also be applied to scenarios in which the interactive object introduces the functions provided by the display device, which makes it easier for user groups with weak text comprehension, or without time to read text guidance information, to quickly obtain the information they need.
  • the obtaining of the action identifier of the interactive object used to respond to the first trigger operation includes: acquiring, based on a preset mapping relationship between trigger operations on the display device and action identifiers of the interactive object, the action identifier corresponding to the first trigger operation.
  • alternatively, the obtaining of the action identifier of the interactive object used to respond to the first trigger operation includes: acquiring response data used to respond to the first trigger operation, and acquiring, based on a preset mapping relationship between the response data and action identifiers of the interactive object, the action identifier corresponding to the response data used to respond to the first trigger operation.
  • in a case where the response data includes text data, the preset mapping relationship includes a preset mapping relationship between key text data in the text data and action identifiers.
  • through the preset mapping relationship, the action identifier used to respond to the first trigger operation can be found quickly and accurately, so that the interactive object can be controlled to make the action corresponding to the action identifier in response to the first trigger operation.
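  • as a non-limiting sketch of the lookup described above, the two preset mapping relationships could be consulted roughly as follows; the language, the names (ACTION_BY_TRIGGER, ACTION_BY_KEY_TEXT, get_action_identifiers) and the sample entries are hypothetical assumptions rather than the disclosed implementation:

      # Hypothetical sketch of the two preset mapping relationships.
      # Mapping 1: trigger operation -> action identifiers of the interactive object.
      ACTION_BY_TRIGGER = {
          "wave_gesture": ["body:wave"],         # greet back; no response data is needed
          "launch_target_app": ["body:greet"],   # greet after the target application starts
      }

      # Mapping 2: key text data found in the response data -> action identifiers.
      ACTION_BY_KEY_TEXT = {
          "microphone": ["pos:B", "body:point_to_microphone_area"],
          "transfer": ["pos:A"],
      }

      def get_action_identifiers(trigger: str, response_text: str = "") -> list[str]:
          """Return the action identifiers used to respond to the first trigger operation."""
          # First try the direct trigger-operation mapping.
          if trigger in ACTION_BY_TRIGGER:
              return ACTION_BY_TRIGGER[trigger]
          # Otherwise look for key text data inside the response data.
          for key_text, identifiers in ACTION_BY_KEY_TEXT.items():
              if key_text in response_text:
                  return identifiers
          return []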
  • the action identifier includes a physical action identifier of the interactive object, and/or a display position identifier of the interactive object when the interactive object performs an action.
  • the physical action identifier can be used to identify the specific physical action that the interactive object makes during the response process, and the display position identifier can be used to identify the position at which the interactive object is displayed in the display area of the display device during the response process. Using at least one of these two identifiers can improve the display effect of the interactive object during display.
  • the receiving of the first trigger operation on the display device includes: receiving a trigger operation on a target application of the display device, and/or receiving a trigger operation on a target function option of the target application of the display device.
  • in a case where the action identifier includes the physical action identifier of the interactive object, the action corresponding to the action identifier includes a physical action of the interactive object pointing to a target display area of the display device;
  • in a case where the action identifier includes the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier includes an action performed by the interactive object at the target display position;
  • in a case where the action identifier includes both the physical action identifier and the display position identifier, the action corresponding to the action identifier includes a physical action of the interactive object pointing to the target display area while at the target display position.
  • the body action identifier of the interactive object may be a directional action identifier, such as pointing to a specific area, so that during the interaction the user can quickly learn the specific content the interactive object is currently responding to, making the interaction process more realistic and smooth; and/or the display position identifier of the interactive object may identify the target display position at which the interactive object makes the directional action or another action, which makes it easier for the user to view the response content, avoids possible occlusion and similar problems, and achieves a better interactive effect.
  • controlling the interactive object displayed by the display device to respond includes: rendering a display screen of the interactive object, where the display screen includes any one of the following screen contents: screen content of the interactive object making the physical action corresponding to the physical action identifier; or screen content of the interactive object making, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier; and controlling the display device to display the display screen of the interactive object.
  • the controlling the display device to display the display screen of the interactive object includes: controlling the display device to display the display screen of the interactive object on a background layer of a display interface of a target application, where the display interface is located above the background layer.
  • in this way, the two can respond separately, which avoids possible conflicts between the response process of the interactive object and the operation of the target application.
  • the method further includes:
  • Control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
  • the response is not limited to the response related to the action identifier of the interactive object, and the response can also be achieved by playing voice or displaying a prompt box, so that the presentation mode of response data is diversified and the interactive experience is improved.
  • embodiments of the present disclosure provide an interactive system, including: a display device and a server;
  • the display device is configured to obtain a first trigger operation on the display device and send the first trigger operation to the server, and control the interactive object displayed by the display device to respond based on the server's instruction;
  • the server is configured to receive the first trigger operation; obtain an action identifier of an interactive object used to respond to the first trigger operation; based on the action identifier, instruct the display device to control the interactive object to respond, The response includes an action corresponding to the action identifier of the interaction object.
  • an interaction device including:
  • the receiving module is used to receive the first trigger operation on the display device
  • An obtaining module configured to obtain an action identifier of an interactive object used to respond to the first trigger operation
  • the control module is configured to control the interactive object displayed by the display device to respond based on the action identifier, and the response includes an action corresponding to the action identifier of the interactive object.
  • when the acquiring module is used to acquire the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module is configured to: acquire, based on a preset mapping relationship between trigger operations on the display device and action identifiers of the interactive object, the action identifier corresponding to the first trigger operation.
  • alternatively, when the acquiring module is used to acquire the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module is configured to: acquire response data used to respond to the first trigger operation, and acquire, based on a preset mapping relationship between the response data and action identifiers of the interactive object, the action identifier corresponding to the response data used to respond to the first trigger operation.
  • in a case where the response data includes text data, the preset mapping relationship includes a preset mapping relationship between key text data in the text data and action identifiers.
  • the action identifier includes a physical action identifier of the interactive object, and/or a display position identifier of the interactive object when the interactive object performs an action.
  • when the receiving module is used to receive the first trigger operation on the display device, the receiving module is configured to: receive a trigger operation on a target application of the display device, and/or receive a trigger operation on a target function option of the target application of the display device.
  • in a case where the action identifier includes the physical action identifier of the interactive object, the action corresponding to the action identifier includes a physical action of the interactive object pointing to a target display area of the display device;
  • in a case where the action identifier includes the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier includes an action made by the interactive object at the target display position;
  • in a case where the action identifier includes both the physical action identifier and the display position identifier, the action corresponding to the action identifier includes a physical action of the interactive object pointing to the target display area while at the target display position;
  • the target display area is a preset display area or a display area associated with the preset display area.
  • when the control module is configured to control, based on the action identifier, the interactive object displayed by the display device to respond, the control module is configured to: render a display screen of the interactive object, where the display screen includes any one of the following screen contents: screen content of the interactive object making the physical action corresponding to the physical action identifier; or screen content of the interactive object making, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier; and control the display device to display the display screen of the interactive object.
  • when the control module is used to control the display device to display the display screen of the interactive object, the control module is configured to: control the display device to display the display screen of the interactive object on a background layer of a display interface of a target application, where the display interface is located above the background layer.
  • control module is further used to:
  • Control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
  • an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus.
  • the memory stores machine-readable instructions executable by the processor.
  • the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the processor executes the interaction method as described in the first aspect.
  • embodiments of the present disclosure provide a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the processor executes the interaction method described above.
  • FIG. 1 shows a schematic diagram of a display device provided by an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of an interaction method provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of a response process based on a first trigger operation provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic flowchart of another response process based on a first trigger operation provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of the first display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a second display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of a third display interface displaying response content of interactive objects provided by an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of a nine-square grid of a display interface provided by an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of an interface of an interactive object provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of an interface of another interactive object provided by an embodiment of the present disclosure.
  • FIG. 11 shows a schematic diagram of a specific processing flow for controlling an interactive object displayed by a display device to respond according to an embodiment of the present disclosure
  • FIG. 12 shows a schematic diagram of a display screen of an interactive object provided by an embodiment of the present disclosure
  • FIG. 13 shows a schematic structural diagram of an interactive system provided by an embodiment of the present disclosure
  • FIG. 14 shows a schematic structural diagram of an interactive device provided by an embodiment of the present disclosure
  • FIG. 15 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • At least one embodiment of the present disclosure provides an interactive method that can be executed by electronic devices such as a display device or a server.
  • the display device is, for example, a terminal device such as a game console, a desktop computer, an advertising machine, a vehicle-mounted terminal, a virtual reality (VR) device, or an augmented reality (AR) device.
  • the server may include a local server or a cloud server. The present disclosure does not limit the specific form of the display device and the server.
  • the interactive object can be displayed on the display device.
  • the interactive object can be any object capable of interacting with the target object, such as a virtual character, a virtual animal, a virtual item, a cartoon image, or any other avatar that can implement interactive functions; the presentation form of the avatar can be either 2D or 3D, which is not limited in the present disclosure.
  • the target object can be a natural person, a robot, or other smart devices.
  • the interaction mode between the interaction object and the target object can be an active interaction mode or a passive interaction mode.
  • the target object can express a demand by making gestures or body movements, thereby triggering the interactive object to interact with it in the active interaction mode.
  • the interactive object may actively greet the target object, prompt the target object to make an action, etc., so that the target object interacts with the interactive object in a passive manner.
  • FIG. 1 shows a display device proposed by at least one embodiment of the present disclosure.
  • the display device is a display device with a transparent display screen, which can display a three-dimensional picture on the transparent display screen to present an interactive object with a three-dimensional effect.
  • the interactive objects displayed on the transparent display screen in Figure 1 are virtual cartoon characters.
  • the display device may also be a mobile terminal such as a mobile phone or a tablet computer, and an application program (APP) capable of displaying interactive objects may be installed on the mobile terminal, for example, to realize the interaction between the interactive object and the user.
  • it may be a general-purpose application configured with a software development kit (SDK) that realizes the interactive capabilities of interactive objects.
  • for example, an SDK that realizes the interactive capabilities of interactive objects can be embedded in some bank apps; then, when the bank APP is running, the interactive object can be invoked as needed to interact with the user.
  • the above-mentioned display device may be configured with a memory and a processor.
  • the memory is used to store computer instructions that can run on the processor.
  • the processor is used to execute the computer instructions so that, when the computer instructions are executed, the interactive object displayed on the screen responds to the target object.
  • the embodiment of the present disclosure proposes a solution in which an interactive object responds to a user's trigger operation.
  • the interactive object can respond to the user through anthropomorphic actions, so that the interaction process is smoother and the interaction experience can be effectively improved.
  • the embodiments of the present disclosure can also be applied to scenarios in which the interactive object introduces the functions provided by the display device, which makes it easier for user groups with weak text comprehension, or without time to read text guidance information, to quickly obtain the information they need.
  • Fig. 2 is a flowchart of an interaction method provided by an embodiment of the present disclosure.
  • the interaction method includes steps 201 to 203, wherein:
  • Step 201 Receive a first trigger operation on the display device.
  • Step 202 Obtain an action identifier of an interactive object used to respond to the first trigger operation.
  • Step 203 Based on the action identifier, control the interactive object displayed by the display device to respond, and the response includes an action corresponding to the action identifier of the interactive object.
  • the foregoing interaction method may be executed by the display device, that is, the response to the first trigger operation is completed locally; the foregoing interaction method may also be executed by the server, that is, the server completes the acquisition of the data used to respond to the first trigger operation and instructs the interactive object of the display device to respond.
  • when the first trigger operation on the display device is received in step 201, the display device may detect whether the first trigger operation exists. For example, it may be determined whether there is a first trigger operation by detecting whether there is a touch operation on the display screen of the display device; or by detecting whether a set user facial expression or user body movement appears in the image collected by the display device; or by detecting a voice instruction collected by the display device; and so on.
  • the specific detection method may be determined by the detection capabilities supported by the sensors configured in the display device, which is not limited in the present disclosure.
  • the display device can report the detected first trigger operation to the server, so that the server can obtain various types of data used to respond to the first trigger operation and send the data required for display to the display device, and the display device then displays the response of the interactive object.
  • the first trigger operation may be used to request the display device to provide a certain function or certain data.
  • the first trigger operation may be a trigger operation on the target application of the display device, for example, a trigger operation of clicking an icon of the target application to request the display device to start the target application to provide a certain service.
  • the first trigger operation may also be a trigger operation on a target function option of the target application of the display device, for example, a trigger operation of clicking the target function option in the target application to request the target application to start the function corresponding to that target function option.
  • the specific form of the trigger operation is as described above; it may be a contact operation on the display device or a non-contact operation, such as making a certain gesture or inputting voice.
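  • as a non-limiting sketch of the detection alternatives just listed, the check for a first trigger operation could be dispatched roughly as follows; the helper functions standing in for the touch, camera, and voice sensors are hypothetical placeholders:

      from typing import Optional

      # Hypothetical helpers standing in for whatever sensors the display device exposes.
      def read_touch_event() -> Optional[str]: ...
      def detect_gesture_in_camera_frame() -> Optional[str]: ...
      def recognize_voice_instruction() -> Optional[str]: ...

      def detect_first_trigger_operation() -> Optional[str]:
          """Return a trigger-operation label, or None if no first trigger operation exists."""
          # 1) a touch operation on the display screen of the display device
          touch = read_touch_event()
          if touch:
              return touch
          # 2) a set user facial expression or body movement in the collected image
          gesture = detect_gesture_in_camera_frame()
          if gesture:
              return gesture
          # 3) a voice instruction collected by the display device
          return recognize_voice_instruction()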
  • FIG. 3 shows a schematic flowchart of a response process based on a first trigger operation provided by an embodiment of the present disclosure, including the following steps:
  • Step 301 Receive a first trigger operation on the display device.
  • Step 302 Based on the preset mapping relationship between the trigger operation on the display device and the action identifier of the interaction object, obtain the action identifier corresponding to the first trigger operation.
  • Step 303 Based on the action identifier, the interactive object displayed by the display device is controlled to respond, and the response includes an action corresponding to the action identifier of the interactive object.
  • Fig. 4 shows a schematic flow chart of another response process based on a first trigger operation provided by an embodiment of the present disclosure, including the following steps:
  • Step 401 Receive a first trigger operation on the display device.
  • Step 402 Obtain response data used to respond to the first trigger operation.
  • Step 403 Based on the preset mapping relationship between the response data of the trigger operation to the display device and the action identifier of the interaction object, obtain the action identifier corresponding to the response data used to respond to the first trigger operation.
  • Step 404 Based on the action identifier, control the interactive object displayed by the display device to respond, and the response includes an action corresponding to the action identifier of the interactive object.
  • for step 301 and step 401, please refer to the relevant parts above; they will not be repeated here.
  • the embodiments of the present disclosure can configure a preset mapping relationship between the first trigger operation and the action identifier of the interactive object.
  • for example, the first trigger operation can be a greeting gesture made by the user; a mapping relationship can then be established directly between that first trigger operation and the greeting gesture action made by the interactive object in response, and no additional response data is required.
  • a preset mapping relationship between the response data of the first trigger operation and the action identifier of the interaction object may also be configured.
  • the user's interaction intention can be recognized through the first trigger operation, and then, according to the interaction intention, response data that meets the interaction intention can be found.
  • the response data may be pre-stored, or may be obtained through a network request from other content servers, which is not limited in the present disclosure.
  • the data used to respond to the first trigger operation may include response data in addition to the action identifier.
  • the response data includes, but is not limited to, the response data of the interactive object for the first trigger operation, and may also include the response data of the target application on the display device for the first trigger operation, and so on.
  • whether the preset mapping relationship is established between the first trigger operation and the action identifier of the interactive object, or between the response data of the first trigger operation and the action identifier of the interactive object, can be configured based on the requirements of the actual interactive scene, and the configuration of both of the above-mentioned preset mapping relationships can be supported in the same interactive scene.
  • the response data may include text data
  • the preset mapping relationship may include a preset mapping relationship between key text data in the text data and an action identifier.
  • for example, when the first trigger operation is a trigger operation by which the user requests to start the target application, especially in an interactive scenario where the user starts the target application for the first time, the interactive object can introduce the usage instructions for the target application, and the response data can then be the usage instructions for each function option in the target application.
  • in this case, the function option can be used as key text data, and a preset mapping relationship can be established between the function option and the action identifier of the interactive object.
  • the action identifier of the interactive object is, for example, the identifier of a physical action of pointing to the function option, so that, in the process of displaying the response content of the interactive object, the display effect of the interactive object pointing to each introduced function option can be presented in the interface of the display device.
  • the preset mapping relationship may be pre-configured in the background and used to record specific response modes of interactive objects corresponding to different trigger operations, where the specific response modes may be marked by specific action identifiers.
  • for example, the specific action identifier can identify a greeting action made by the interactive object after the target application is started, or a pointing action made by the interactive object while the usage instructions for the target application are being presented.
  • the preset mapping relationship can also be obtained after repeated learning based on a deep learning algorithm; in this way, after the first trigger operation or the response data of the first trigger operation is received, the deep learning model can be used to predict the mapped action identifier of the interactive object.
  • the first trigger operation, or the response data of the first trigger operation, in the preset mapping relationship may have at least one action identifier, that is, one first trigger operation can be mapped to one action identifier or to at least two action identifiers, so that the interactive object can make a specific action corresponding to one action identifier when responding, or make a series of specific actions corresponding to at least two action identifiers.
  • the at least two action identifiers may have an arrangement relationship.
  • the arrangement relationship of the at least two action identifiers is configured according to the order of execution, for example, by adding execution timestamps.
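  • a minimal sketch of how at least two mapped action identifiers could carry such an arrangement relationship through execution timestamps is shown below; the field names and sample values are hypothetical assumptions:

      from dataclasses import dataclass

      @dataclass
      class TimedActionIdentifier:
          action_id: str   # e.g. a body action identifier or a display position identifier
          start_ms: int    # execution timestamp used to order the actions

      # One first trigger operation mapped to a series of action identifiers.
      response_plan = [
          TimedActionIdentifier("pos:B", start_ms=0),                     # move to the target display position first
          TimedActionIdentifier("body:point_upper_left", start_ms=300),
          TimedActionIdentifier("body:wave", start_ms=1500),
      ]

      # Executing the plan in timestamp order yields the series of specific actions.
      for step in sorted(response_plan, key=lambda s: s.start_ms):
          print(step.start_ms, step.action_id)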
  • the action identifier of the interactive object is used to identify a specific action made by the interactive object, so that the interactive object can make an action corresponding to the action identifier in the process of responding by the interactive object.
  • the action identifier includes the physical action identifier of the interactive object, and/or the display position identifier of the interactive object when the interactive object performs the action. The physical action identifier can be used to identify the specific physical movement made by the interactive object during the response process, and the display position identifier can be used to identify the position at which the interactive object is displayed in the display area of the display device during the response process. Using at least one of these two identifiers can improve the display effect of the interactive object in the display process.
  • the body action identifier is used to identify the specific body action of the interactive object.
  • the specific body action may be the movement of the head of the interactive object, or the movement of the body torso of the interactive object, and the movement of the head may also include facial expression actions.
  • the display position identifier is used to identify the specific display position of the interactive object when it makes an action.
  • the specific display position is a specific position on the interface of the display device; choosing it appropriately makes it easier for the user to view the response content, avoids possible occlusion and similar problems, and achieves a better interactive effect.
  • in a case where the action identifier includes the physical action identifier of the interactive object, the action corresponding to the action identifier includes, but is not limited to, a physical action of the interactive object pointing to the target display area of the display device.
  • the directional body action of the interactive object can be applied to a scenario in which related functions on the interface of the display device are described. For example, it can be a scenario in which certain functions on the target application are introduced.
  • the first trigger operation received may be a trigger operation on the target application of the display device, or a trigger operation on the target function option of the target application of the display device.
  • in this way, the user can quickly learn the specific content of the interactive object's current response during the interaction, making the interaction process more realistic and smooth; and/or the display position identifier of the interactive object can identify the target display position at which the directional action or another action is made, which makes it easier for the user to view the response content, avoids possible problems such as occlusion, and achieves a better interactive effect.
  • Example 1 In the case where the action identification includes the physical action identification of the interactive object, the action corresponding to the action identification includes the physical action of the interactive object pointing to the target display area of the display device.
  • One possible interaction process is:
  • first, the response data corresponding to the first trigger operation can be obtained, where the response data includes the text data of the function introduction of the target application.
  • further, based on the preset mapping relationship between the response data and the action identifiers of the interactive object, the action identifier of the interactive object corresponding to the key text data in the text data can be obtained.
  • the key text data is, for example, text data describing a specific function on the target application, for example, text data describing the first function on the target application.
  • the function option of the first function can be located in a preset display area on the display interface of the target application.
  • the action identifier of the interactive object may be configured as an action identifier pointing to the preset display area where the first function is located.
  • the display device can be controlled to display the content of the response of the interactive object, and the content of the response can include an action pointing to the preset display area where the first function is located.
  • the content of the response may also include other forms of content, such as replying to a voice or displaying some prompt content, which is not limited in the present disclosure.
  • FIG. 5 shows an example of a display interface displaying the response content of the interactive object.
  • the response content of the interactive object includes an action pointing to the "microphone" function, and the corresponding microphone operation instructions are also displayed.
  • Example 2 In the case where the action identifier includes the display position identifier of the interactive object when the action is taken, the action corresponding to the action identifier includes the action made by the interactive object on the target display position.
  • One possible interaction process is:
  • the response data corresponding to the first trigger operation can be obtained.
  • the response data includes function-related data provided by the target function option.
  • further, based on the preset mapping relationship between the first trigger operation and the action identifiers of the interactive object, the action identifier of the interactive object corresponding to the first trigger operation can be obtained.
  • the interactive object can be made to perform an action introducing the function provided by the target function option on the target display position. In this way, based on the action identification of the interactive object, the display device can be controlled to display the content of the response of the interactive object, and the content of the response may include the action of the interactive object on the target display position.
  • FIG. 6 shows an example of a display interface displaying the response content of the interactive object.
  • the target function option is, for example, a transfer option in the target application that provides a transfer function.
  • after the transfer option is triggered, the target application jumps to the transfer display interface, and the response content of the interactive object can then be displayed.
  • for example, a display position identifier can be configured in the action identifier of the interactive object, so that the interactive object introduces the relevant information of the transfer function at the target display position at the bottom left of the transfer operation area.
  • Example 3 In a case where the action identification includes a physical action identification and a display location identification, the action corresponding to the action identification includes a physical action of the interactive object pointing to the target display area at the target display location.
  • One possible interaction process is:
  • in this example, the configuration of a preset mapping relationship between the display position identifier of the interactive object and the response data can be added.
  • the display device can be controlled to display the content that the interactive object responds based on the physical action identification and the display location identification of the interactive object.
  • the response content may include the physical action corresponding to the physical action identifier, made by the interactive object at the target display position corresponding to the display position identifier; reference can be made to FIG. 5 and FIG. 7, which show display interfaces displaying the response content of the interactive object.
  • the response content of the interactive object includes a pointing action toward the "microphone" function, made at the target display position "B" corresponding to the display position identifier, and the corresponding microphone operation instructions are also displayed.
  • the target display area pointed to by the interactive object can be the preset display area where the triggered target function option is located, or the display area associated with the triggered target application, or the display area associated with the triggered target function option.
  • the target display position can also be determined based on the triggered target function option and the triggered target application.
  • the body action identifier and the display position identifier in the preset mapping relationship can be configured based on specific interaction requirements, which is not limited in the present disclosure.
  • for example, the display interface of the target application can be divided in the form of a nine-square grid (see FIG. 8) into the nine display areas "upper left", "up", "upper right", "left", "middle", "right", "lower left", "down", and "lower right".
  • Each display area may include different functional options of the target application.
  • as shown in FIG. 9 and FIG. 10, the physical action identifiers of the interactive object can include six groups of physical actions, namely "upper left", "left", "lower left", "upper right", "right", and "lower right", and the display position identifiers of the interactive object can include three display positions "A", "B", and "C".
  • in a case where the target function option of the target application triggered by the first trigger operation is located in one of the display areas "upper left", "left", "lower left", "upper right", "right", or "lower right", the display position identifier that has a mapping relationship with the first trigger operation, or with the response data of the first trigger operation, can be "B", and the body action identifier that has a mapping relationship with the first trigger operation, or with the response data of the first trigger operation, can be one of "upper left", "left", "lower left", "upper right", "right", and "lower right"; which body action identifier is selected is determined by the display area of the triggered target function option. For example, if the display area of the triggered target function option is "upper left", the body action "upper right" can be selected.
  • in a case where the target function option of the target application triggered by the first trigger operation is located in one of the three display areas "up", "middle", and "down", the display position identifier that has a mapping relationship with the first trigger operation, or with the response data of the first trigger operation, can be "A" or "C", and the body action identifier that has a mapping relationship with the first trigger operation, or with the response data of the first trigger operation, can be one of "upper left", "left", "lower left", "upper right", "right", and "lower right"; which body action identifier is selected is determined by the display area of the triggered target function option.
  • for example, if the display area of the triggered target function option is "up" and the display position identifier is "A", the physical action "upper left" can be selected; or, if the display area of the triggered target function option is "up" and the display position identifier is "C", the physical action "upper right" can be selected.
  • the key text may be, for example, text used to describe the display area where the target function option is located; text with clear orientation information, such as "above", "above right", and "below", may be used directly as the key text.
  • the name of the target function option can also be used as the key text, in which case the display area where the target function option is located has been pre-recorded for that name.
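  • the area-to-gesture selection described above can be summarized with the small lookup sketch below; only the combinations explicitly given in this description are filled in, and the remaining entries, names, and values are hypothetical assumptions:

      SIDE_AREAS = {"upper left", "left", "lower left", "upper right", "right", "lower right"}
      MIDDLE_COLUMN = {"up", "middle", "down"}

      # Combinations explicitly given above; other pairs would be configured analogously.
      BODY_ACTION_FOR = {
          ("B", "upper left"): "upper right",   # object at position "B" points toward the upper-left area
          ("A", "up"): "upper left",
          ("C", "up"): "upper right",
      }

      def choose_response(target_area: str, side_position: str = "A") -> tuple[str, str]:
          """Return (display position identifier, body action identifier) for the triggered area."""
          if target_area in SIDE_AREAS:
              position = "B"             # stand in the middle and point to the side area
          elif target_area in MIDDLE_COLUMN:
              position = side_position   # "A" or "C", keeping the middle column unobstructed
          else:
              raise ValueError(f"unknown display area: {target_area}")
          body_action = BODY_ACTION_FOR.get((position, target_area), "point toward " + target_area)
          return position, body_action

      # e.g. choose_response("upper left") -> ("B", "upper right")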
  • the specific processing flow of controlling, based on the action identifier, the interactive object displayed by the display device to respond, as mentioned in step 203, step 303, and step 404, is shown in FIG. 11 and includes the following steps:
  • Step 4041 Acquire driving data corresponding to the action identifier.
  • the driving data is used to adjust the display state of the interactive object.
  • taking the case where the interactive object is presented as an avatar as an example, parameters of the relevant parts of the 3D model or 2D model of the interactive object are recorded in the background, and adjusting these parameters can change the display state of the interactive object.
  • the relevant parts include, but are not limited to, the head, various joint parts of the limbs, and facial expressions.
  • the action identification can reflect the display state of the interactive object to be presented, so the driving data corresponding to the action identification can be obtained.
  • the driving data can be stored in a local database or a cloud database; when the embodiment of the present disclosure is applied to a server, the driving data can be stored in a storage unit of the server itself, or stored on other service-related servers, which is not limited in the present disclosure.
  • Step 4042 Use the driving data to render the display screen of the interactive object.
  • a built-in rendering tool can be used to render the display screen of the interactive object.
  • the specific rendering tool used is not limited in the present disclosure.
  • the display screen may include any one of the following screen contents: screen content of the interactive object making the physical action corresponding to the physical action identifier; or screen content of the interactive object making, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier.
  • for examples of the screen content, refer to FIG. 5 to FIG. 7.
  • Step 4043 Control the display device to display the display screen of the interactive object.
  • the display screen can be presented directly on the display device after the local rendering succeeds; when the embodiment of the present disclosure is applied to a server, the server can send the successfully rendered display screen to the display device, which then presents it.
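  • steps 4041 to 4043 can be pictured with the rough sketch below; the storage, rendering, and display interfaces are hypothetical stand-ins rather than the actual implementation:

      from typing import Any

      def acquire_driving_data(action_identifier: str, database: dict[str, Any]) -> Any:
          """Step 4041: look up the driving data that adjusts the display state of the interactive object."""
          # The driving data may live in a local database, a cloud database, or on the server itself.
          return database[action_identifier]

      def render_display_screen(driving_data: Any) -> bytes:
          """Step 4042: drive the 2D/3D model parameters and render the display screen."""
          # A built-in rendering tool would consume the driving data here; this stub only stands in for it.
          return repr(driving_data).encode()

      def show_on_display_device(frame: bytes) -> None:
          """Step 4043: control the display device to present the rendered display screen."""
          print(f"presenting {len(frame)} bytes of rendered screen content")

      # Putting the three steps together for one action identifier:
      database = {"body:point_upper_right": {"joints": {"right_arm": 45}, "expression": "smile"}}
      frame = render_display_screen(acquire_driving_data("body:point_upper_right", database))
      show_on_display_device(frame)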
  • the response data includes, but is not limited to, the response data of the interactive object for the first trigger operation, the response data of the target application on the display device for the first trigger operation, and the like.
  • the response data corresponding to the first trigger operation can also be obtained, and the display device can be controlled to play the voice data in the response data and/or to display a prompt box of the response data on the interface of the display device; the present disclosure does not limit this.
  • the foregoing embodiments are not limited to performing a response related to the action identifier of the interactive object, and the response can also be achieved by playing voice or displaying a prompt box, etc., so as to diversify the presentation mode of response data and improve the interactive experience.
  • in the process in which the display device displays the display screen of the interactive object, the layer of the response content of the target application can be above the layer of the response content of the interactive object.
  • the two can respond separately, which can avoid possible conflicts between the response process of the interactive object and the operation of the target application.
  • the response content of the interactive object can be set on the background layer of the target application.
  • that is, the display device is controlled to display the display screen of the interactive object on the background layer of the display interface of the target application, where the display interface is located above the background layer.
  • the display effect can be seen in FIG. 12: the display screen of the interactive object is in the background layer, and the response content of the target application can be displayed on top of the background layer.
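  • a rough sketch of the layering just described is shown below; the layer indices and labels are hypothetical assumptions used only to illustrate the stacking order:

      # The interactive object's display screen sits on the background layer; the target
      # application's display interface and its response content are composited above it.
      display_stack = [
          {"layer": 0, "content": "display screen of the interactive object (background layer)"},
          {"layer": 1, "content": "display interface of the target application"},
          {"layer": 2, "content": "response content of the target application, e.g. a prompt box"},
      ]

      for entry in sorted(display_stack, key=lambda e: e["layer"]):
          print(entry["layer"], entry["content"])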
  • in addition, the embodiment of the present disclosure can identify the driving mode of the driving data and, in response to the driving mode, obtain the control parameters of the interactive object according to the driving data and control the posture of the interactive object according to the control parameters.
  • a voice data sequence corresponding to the driving data is obtained.
  • the voice data sequence includes a plurality of voice data units; if it is detected that a voice data unit includes target data, it is determined that the driving mode of the driving data is the first driving mode, where the target data corresponds to preset control parameters of the interactive object; further, in response to the first driving mode, the preset control parameters corresponding to the target data can be used as the control parameters of the interactive object.
  • the target data includes a keyword and/or a key character, and the keyword or key character corresponds to the preset control parameters of a set action of the interactive object.
  • if it is not detected that a voice data unit includes target data, it can be determined that the driving mode of the driving data is the second driving mode; further, in response to the second driving mode, the feature information of at least one voice data unit in the voice data sequence may be obtained, and the control parameters of the interactive object corresponding to the feature information may then be obtained.
  • in a case where the aforementioned voice data sequence includes a phoneme sequence, the phoneme sequence may be feature-encoded to obtain a first coding sequence corresponding to the phoneme sequence; according to the first coding sequence, the feature code corresponding to at least one phoneme is obtained; and according to the feature code, the feature information of the at least one phoneme is obtained.
  • the aforementioned voice data sequence may also include a voice frame sequence.
  • in this case, a first acoustic feature sequence corresponding to the voice frame sequence may be acquired, where the first acoustic feature sequence includes an acoustic feature vector corresponding to each voice frame in the voice frame sequence; according to the first acoustic feature sequence, the acoustic feature vector corresponding to at least one voice frame is acquired; and according to the acoustic feature vector, the feature information corresponding to the at least one voice frame is obtained.
  • the control parameters of the interactive object include a control vector of at least one local area of the interactive object.
  • the above-mentioned characteristic information may be input to the recurrent neural network to obtain the control parameters of the interactive object corresponding to the characteristic information.
  • the foregoing acquiring of the control parameters of the interactive object according to the driving data may include: acquiring a control vector of at least one local area of the interactive object according to the driving data; and controlling the posture of the interactive object according to the control parameters may include: controlling the facial movements and/or body movements of the interactive object according to the acquired control vector of the at least one local area.
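  • the two driving modes described above can be sketched as follows; the keyword table, the feature extractor, and the network stand-in are hypothetical assumptions rather than the disclosed implementation:

      from typing import Sequence

      # Target data (keywords) mapped to preset control parameters of set actions.
      PRESET_CONTROL_PARAMS = {
          "hello": {"right_arm": "wave"},
          "here": {"right_arm": "point"},
      }

      def extract_feature_info(voice_unit: str) -> list[float]:
          """Stand-in for feature-encoding a phoneme sequence or an acoustic feature vector."""
          return [float(len(voice_unit))]

      def predict_control_params(features: list[float]) -> dict:
          """Stand-in for the recurrent neural network mapping feature information to
          control vectors of at least one local area of the interactive object."""
          return {"mouth_openness": min(1.0, features[0] / 10.0)}

      def control_params_for(voice_sequence: Sequence[str]) -> list[dict]:
          params = []
          for unit in voice_sequence:
              if unit in PRESET_CONTROL_PARAMS:
                  # First driving mode: the unit contains target data -> use the preset parameters.
                  params.append(PRESET_CONTROL_PARAMS[unit])
              else:
                  # Second driving mode: derive feature information, then predict control parameters.
                  params.append(predict_control_params(extract_feature_info(unit)))
          return params

      # e.g. control_params_for(["hello", "zhuan", "zhang"])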
  • an embodiment of the present disclosure provides an interactive system.
  • the interactive system includes a display device 1301 and a server 1302.
  • the display device 1301 is configured to obtain the first trigger operation on the display device 1301 and send the first trigger operation to the server 1302, and control the interactive objects displayed by the display device to respond based on the instructions of the server 1302;
  • the server 1302 is configured to receive the first trigger operation and obtain the action identifier of the interactive object used to respond to the first trigger operation; based on the action identifier, instruct the display device 1301 to control the interactive object to respond; the response includes the action identifier corresponding to the interactive object Actions.
  • the display device 1301 can detect the first trigger operation and then request the response data from the server 1302; accordingly, the server can obtain, from a database in which the preset mapping relationships are pre-stored, the action identifier of the interactive object used to respond to the first trigger operation, and then obtain the driving data of the corresponding interactive object based on the action identifier.
  • the server 1302 can render the driving data into an animation of the interactive object through a rendering tool and then directly deliver the rendered result to the display device 1301, so that the display device 1301 displays the rendered result and presents an anthropomorphic effect in which the interactive object responds to the first trigger operation.
  • the server 1302 may also send the driving data to the display device 1301; the display device 1301 then renders the driving data through its built-in rendering tool and displays the rendered result, presenting an anthropomorphic effect in which the interactive object responds to the first trigger operation.
  • in this way, the server 1302 in the interactive system provides the main computing capability, so that the display device 1301 does not need to perform much data processing locally, which reduces the processing pressure on the display device 1301.
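  • one possible division of work between the display device 1301 and the server 1302, sketched with hypothetical message formats and values, is shown below:

      import json

      def display_device_report(trigger: str) -> str:
          """Display device 1301: report the detected first trigger operation to the server."""
          return json.dumps({"type": "first_trigger_operation", "trigger": trigger})

      def server_handle(request: str) -> str:
          """Server 1302: look up the action identifier, fetch the driving data, and either
          render the animation itself or return the driving data for local rendering."""
          trigger = json.loads(request)["trigger"]
          action_id = {"launch_target_app": "body:greet"}.get(trigger, "body:idle")
          driving_data = {"action_id": action_id, "joints": {"right_arm": 30}}
          return json.dumps({"instruction": "render_locally", "driving_data": driving_data})

      def display_device_present(response: str) -> None:
          """Display device 1301: render with the built-in rendering tool and present the result."""
          payload = json.loads(response)
          print("rendering interactive object with", payload["driving_data"])

      display_device_present(server_handle(display_device_report("launch_target_app")))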
  • in this way, the interactive object can respond to the user's trigger operation through anthropomorphic actions, making the interaction process smoother and effectively improving the interactive experience.
  • an embodiment of the present disclosure provides an interaction device 1400.
  • the interaction device 1400 includes a receiving module 1401, an obtaining module 1402, and a control module 1403.
  • the receiving module 1401 is configured to receive the first trigger operation on the display device
  • the obtaining module 1402 is configured to obtain the action identifier of the interactive object used to respond to the first trigger operation
  • the control module 1403 is configured to control the interactive object displayed by the display device to respond based on the action identifier, and the response includes an action corresponding to the action identifier of the interactive object.
  • when the acquiring module 1402 is used to acquire the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module 1402 is configured to: acquire, based on a preset mapping relationship between trigger operations on the display device and action identifiers of the interactive object, the action identifier corresponding to the first trigger operation.
  • alternatively, when the acquiring module 1402 is used to acquire the action identifier of the interactive object used to respond to the first trigger operation, the acquiring module 1402 is configured to: acquire response data used to respond to the first trigger operation, and acquire, based on a preset mapping relationship between the response data and action identifiers of the interactive object, the action identifier corresponding to the response data used to respond to the first trigger operation.
  • in a case where the response data includes text data, the preset mapping relationship includes a preset mapping relationship between key text data in the text data and action identifiers.
  • the action identifier includes the body action identifier of the interactive object, and/or the display position identifier of the interactive object when the interactive object performs the action.
  • when the receiving module 1401 is used to receive the first trigger operation on the display device, the receiving module 1401 is configured to: receive a trigger operation on a target application of the display device, and/or receive a trigger operation on a target function option of the target application of the display device.
  • in a case where the action identifier includes the physical action identifier of the interactive object, the action corresponding to the action identifier includes a physical action of the interactive object pointing to a target display area of the display device;
  • in a case where the action identifier includes the display position identifier of the interactive object when performing an action, the action corresponding to the action identifier includes an action made by the interactive object at the target display position;
  • in a case where the action identifier includes both the physical action identifier and the display position identifier, the action corresponding to the action identifier includes a physical action of the interactive object pointing to the target display area while at the target display position;
  • the target display area is a preset display area or a display area associated with the preset display area.
  • when the control module 1403 is configured to control, based on the action identifier, the interactive object displayed by the display device to respond, the control module 1403 is configured to: render a display screen of the interactive object, where the display screen includes any one of the following screen contents: screen content of the interactive object making the physical action corresponding to the physical action identifier; or screen content of the interactive object making, at the target display position corresponding to the display position identifier, the body action corresponding to the body action identifier; and control the display device to display the display screen of the interactive object.
  • when the control module 1403 is used to control the display device to display the display screen of the interactive object, the control module 1403 is configured to: control the display device to display the display screen of the interactive object on a background layer of a display interface of a target application, where the display interface is located above the background layer.
  • the control module 1403 is also used to: control the display device to play the voice data in the response data, and/or display a prompt box of the response data on the interface of the display device.
  • FIG. 15 is a schematic structural diagram of an electronic device 1500 provided by an embodiment of the disclosure.
  • the electronic device 1500 includes: a processor 1501, a memory 1502, and a bus 1503.
  • the memory 1502 stores machine-readable instructions executable by the processor 1501 (for example, execution instructions corresponding to the receiving module 1401, the obtaining module 1402, and the control module 1403 in the apparatus in FIG. 14, and so on).
  • the processor 1501 communicates with the memory 1502 through the bus 1503.
  • when the machine-readable instructions are executed, the processor 1501 performs the following processing: receiving a first trigger operation on the display device; acquiring an action identifier of the interactive object used to respond to the first trigger operation; and, based on the action identifier, controlling the interactive object displayed by the display device to respond, where the response includes the action corresponding to the action identifier of the interactive object.
  • the embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor executes the interaction method described in the foregoing method embodiment.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the computer program product of the interaction method provided by the embodiment of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the interaction method described in the above method embodiment; for details, see the above method embodiments, which will not be repeated here.
  • the embodiments of the present disclosure also provide a computer program, which when executed by a processor causes the processor to implement any one of the methods in the foregoing embodiments.
  • the computer program product can be specifically implemented by hardware, software, or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium.
  • in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and so on.
  • the interaction method, interaction system, apparatus, device, and computer program proposed in the embodiments of the present disclosure provide an interaction scheme in which an interactive object responds to a user's trigger operation: the action identifier corresponding to the first trigger operation can be used to control the interactive object to respond to the user through anthropomorphic actions, and the response includes the action corresponding to the action identifier, which makes the interaction process more realistic and smooth and can effectively enhance the interaction experience.
  • the interaction solution provided by the embodiments of the present disclosure can also be applied to a scenario in which the interactive object introduces the functions provided by the display device, which helps user groups who have weak text comprehension or no time to read text guidance information to quickly obtain the required information.
  • the interaction solution provided by the embodiments of the present disclosure can also be applied to other application scenarios with interaction requirements, which is not specifically limited in the present disclosure.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present disclosure.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • based on such an understanding, the embodiments of the present disclosure, in essence, or all or part of them, can be embodied in the form of a computer software product.
  • the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interaction method and apparatus, an interaction system, an electronic device and a storage medium, the method comprising: receiving a first trigger operation on a display device (201); obtaining an action identifier of an interactive object for responding to the first trigger operation (202); and controlling, based on the action identifier, the interactive object displayed by the display device to respond, the response comprising an action corresponding to the action identifier of the interactive object (203).

Description

交互方法、装置、交互***、电子设备及存储介质
相关申请的交叉引用
本公开要求于2020年4月13日提交的、申请号为202010285478.9、发明名称为“交互方法、装置、交互***、电子设备及存储介质”的中国专利申请的优先权,该中国专利申请公开的全部内容以引用的方式并入本文中。
技术领域
本公开涉及计算机技术领域,具体涉及一种交互方法、装置、交互***、电子设备及存储介质。
背景技术
人机交互的方式大多为:用户基于按键、触摸、语音进行输入,设备通过在显示屏上呈现图像、文本或虚拟人物进行响应。目前虚拟人物多是在语音助理的基础上改进得到的,用户与虚拟人物的交互还停留在表面上。
发明内容
本公开实施例至少提供一种交互方法、装置、交互***、电子设备及存储介质。
第一方面,本公开实施例提供了一种交互方法,所述方法包括:
接收对显示设备的第一触发操作;
获取用于响应所述第一触发操作的交互对象的动作标识;
基于所述动作标识,控制所述显示设备展示的交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
本公开实施例中提出一种能够由交互对象来响应用户的触发操作的方案,可以利用第一触发操作对应的动作标识,控制交互对象通过拟人化的动作对用户做出响应,且响应中包括与动作标识对应的动作,由此可使得交互流程更为逼真及流畅,能够有效提升交互体验。示例性的,本公开实施例也可以应用在由交互对象对显示设备提供的功能进行介绍的场景下,可以便于一些对文字理解能力较弱或者没有时间查看文字引导信息的用户群体快速获取所需信息。
在一种可能的实施方式中,所述获取用于响应所述第一触发操作的交互对象的动作标识,包括:
基于对所述显示设备的触发操作与所述交互对象的动作标识之间的预设映射关系,获取与所述第一触发操作对应的所述动作标识。
在一种可能的实施方式中,所述获取用于响应所述第一触发操作的交互对象的动作标识,包括:
获取用于响应所述第一触发操作的响应数据;
基于对所述显示设备的触发操作的响应数据与所述交互对象的动作标识之间的预设映射关系,获取用于响应所述第一触发操作的响应数据对应的所述动作标识。
在一种可能的实施方式中,所述响应数据包括文本数据,所述预设映射关系包括所述文本数据中的关键文本数据与动作标识之间的预设映射关系。
以上实施方式中,通过配置触发操作或触发操作的响应数据与动作标识之间的预设映射关系,可以在接收到第一触发操作后,快速精准地查找到用于响应第一触发操作的动作标识,以便控制交互对象做出与动作标识对应的动作,以响应第一触发操作。
在一种可能的实施方式中,所述动作标识包括所述交互对象的肢体动作标识,和/或,所述交互对象在做出动作时的展示位置标识。
该实施例中,既可以通过肢体动作标识,标识交互对象在进行响应过程中所做出的具体肢体动作,还可以通过展示位置标识,标识交互对象在进行响应过程中展示在显示设备的显示区域中的位置,通过以上两种标识中的至少一种,可以提升交互对象在展示过程中的展示效果。
在一种可能的实施方式中,所述接收对显示设备的第一触发操作,包括:
接收对所述显示设备的目标应用程序的第一触发操作;或者,
接收对所述显示设备的目标应用程序的目标功能选项的第一触发操作,所述目标功能选项位于所述目标应用程序的显示界面中的预设展示区域。
在一种可能的实施方式中,在所述动作标识包括所述交互对象的肢体动作标识的情况下,所述动作标识对应的动作包括所述交互对象指向所述显示设备的目标展示区域的肢体动作;
在所述动作标识包括所述交互对象在做出动作时的展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出的动作;
在所述动作标识包括所述肢体动作标识和所述展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出指向所述目标展示区域的肢体动作。
上述实施方式中,交互对象的肢体动作标识可以是指向性的动作标识,例如指向某个具体的区域,这样,可以使用户在交互过程中能够快速获知交互对象当前所响应的具体内容,使得交互过程更为逼真流畅,和/或,交互对象的展示位置标识可以是标识交互对象在目标展示位置上做出的指向性动作或其它动作,这样,可以便于用户观看响应内容,避免可能存在的遮挡等问题,能够达到更好的交互效果。
在一种可能的实施方式中,基于所述动作标识,控制所述显示设备展示的交互对象进行响应,包括:
获取与所述动作标识对应的驱动数据;
利用所述驱动数据,渲染所述交互对象的显示画面,所述显示画面中包括以下画面内容中的任一种:所述交互对象做出与肢体动作标识对应的肢体动作的画面内容;所述交互对象在展示位置标识对应的目标展示位置上做出与所述肢体动作标识对应的肢体动作的画面内容;
控制所述显示设备显示所述交互对象的显示画面。
在一种可能的实施方式中,所述控制所述显示设备显示所述交互对象的显示画面,包括:
控制所述显示设备在目标应用程序的显示界面的背景层上显示所述交互对象的显示画面,所述显示界面位于所述背景层之上。
上述实施方式中,通过目标应用程序和交互对象的显示画面分层次的处理,两者可以分别进行响应,可以避免交互对象的响应过程与目标应用程序的运行可能出现的冲突。
在一种可能的实施方式中,所述方法还包括:
获取所述第一触发操作对应的响应数据;
控制所述显示设备播放所述响应数据中的语音数据,和/或,在所述显示设备的界面上展示所述响应数据的提示框。
上述实施方式中,并不限定于进行与交互对象的动作标识相关的响应,还可以通过播放语音或者展示提示框等方式来实现响应,使得响应数据的呈现方式多样化,提升交互体验。
第二方面,本公开实施例提供了一种交互***,包括:显示设备和服务器;
所述显示设备,用于获取对显示设备的第一触发操作并将所述第一触发操作发送给所述服务器,并且基于所述服务器的指示控制所述显示设备展示的交互对象进行响应;
所述服务器,用于接收所述第一触发操作;获取用于响应所述第一触发操作的交互对象的动作标识;基于所述动作标识,指示所述显示设备控制所述交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
第三方面,本公开实施例提供了一种交互装置,所述装置包括:
接收模块,用于接收对显示设备的第一触发操作;
获取模块,用于获取用于响应所述第一触发操作的交互对象的动作标识;
控制模块,用于基于所述动作标识,控制所述显示设备展示的交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
在一种可能的实施方式中,所述获取模块在用于获取用于响应所述第一触发操作的交互对象的动作标识时,包括:
基于对所述显示设备的触发操作与所述交互对象的动作标识之间的预设映射关系,获取与所述第一触发操作对应的所述动作标识。
在一种可能的实施方式中,所述获取模块在用于获取用于响应所述第一触发操作的交互对象的动作标识时,包括:
获取用于响应所述第一触发操作的响应数据;
基于对所述显示设备的触发操作的响应数据与所述交互对象的动作标识之间的预设映射关系,获取用于响应所述第一触发操作的响应数据对应的所述动作标识。
在一种可能的实施方式中,所述响应数据包括文本数据,所述预设映射关系包括所述文本数据中的关键文本数据与动作标识之间的预设映射关系。
在一种可能的实施方式中,所述动作标识包括所述交互对象的肢体动作标识,和/或,所述交互对象在做出动作时的展示位置标识。
在一种可能的实施方式中,所述接收模块在用于接收对显示设备的第一触发操作时,包括:
接收对所述显示设备的目标应用程序的第一触发操作;或者,
接收对所述显示设备的目标应用程序的目标功能选项的第一触发操作,所述目标功能选项位于所述目标应用程序的显示界面中的预设展示区域。
在一种可能的实施方式中,在所述动作标识包括所述交互对象的肢体动作标识的情况下,所述动作标识对应的动作包括所述交互对象指向所述显示设备的目标展示区域的肢体动作;
在所述动作标识包括所述交互对象在做出动作时的展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出的动作;
在所述动作标识包括所述肢体动作标识和所述展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出指向所述目标展示区域的肢体动作;
其中,所述目标展示区域为所述预设展示区域或者为与所述预设展示区域关联的展示区域。
在一种可能的实施方式中,所述控制模块在用于基于所述动作标识,控制所述显示设备展示的交互对象进行响应时,包括:
获取与所述动作标识对应的驱动数据;
利用所述驱动数据,渲染所述交互对象的显示画面,所述显示画面中包括以下画面内容中的任一种:所述交互对象做出与肢体动作标识对应的肢体动作的画面内容;所述交互对象在展示位置标识对应的目标展示位置上做出与所述肢体动作标识对应的肢体动作的画面内容;
控制所述显示设备显示所述交互对象的显示画面。
在一种可能的实施方式中,所述控制模块在用于控制所述显示设备显示所述交互对象的显示画面时,包括:
控制所述显示设备在目标应用程序的显示界面的背景层上显示所述交互对象的显示画面,所述显示界面位于所述背景层之上。
在一种可能的实施方式中,所述控制模块还用于:
获取所述第一触发操作对应的响应数据;
控制所述显示设备播放所述响应数据中的语音数据,和/或,在所述显示设备的界面上展示所述响应数据的提示框。
第四方面,本公开实施例提供了一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时使所述处理器执行如第一方面所述的交互方法。
第五方面,本公开实施例提供了一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时使所述处理器执行如第一方面所述的交互方法。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
图1示出了本公开实施例所提供的一种显示设备的示意图;
图2示出了本公开实施例所提供的一种交互方法的流程图;
图3示出了本公开实施例所提供的一种基于第一触发操作的响应过程的流程示意图;
图4示出了本公开实施例所提供的另一种基于第一触发操作的响应过程的流程示意图;
图5示出了本公开实施例所提供的第一种展示有交互对象的响应内容的显示界面的示意图;
图6示出了本公开实施例所提供的第二种展示有交互对象的响应内容的显示界面的示意图;
图7示出了本公开实施例所提供的第三种展示有交互对象的响应内容的显示界面的示意图;
图8示出了本公开实施例所提供的一种显示界面的九宫格示意图;
图9示出了本公开实施例所提供的一种交互对象的界面示意图;
图10示出了本公开实施例所提供的另一种交互对象的界面示意图;
图11示出了本公开实施例所提供的一种控制显示设备展示的交互对象进行响应的具体处理流程示意图;
图12示出了本公开实施例所提供的一种交互对象的显示画面示意图;
图13示出了本公开实施例所提供的一种交互***的结构示意图;
图14示出了本公开实施例所提供的一种交互装置的结构示意图;
图15示出了本公开实施例所提供的一种电子设备的结构示意图。
具体实施方式
为使本公开实施例的目的、特征和优点更加清楚,下面将结合本公开实施例中的附图,对本公开实施例进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围,而是仅仅表示本公开的选定实施例。基于本公开的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
本文中术语“和/或”,仅仅是描述一种关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
本公开至少一个实施例提供了一种交互方法,该交互方法可以由显示设备或服务器等电子设备执行,显示设备例如为终端设备,终端设备可以是固定终端或移动终端,例如手机、平板电脑、游戏机、台式机、广告机、车载终端、虚拟现实(Virtual Reality,VR)设备、增强现实(Augmented Reality,AR)设备等。服务器可包括本地服务器或云端服务器等。本公开并不限定显示设备和服务器的具体形式。
在本公开实施例中,在显示设备上可以显示交互对象,交互对象可以是任意一种能够与目标对象进行交互的交互对象,其可以是虚拟人物,还可以是虚拟动物、虚拟物品、卡通形象等等任何能够实现交互功能的虚拟形象,虚拟形象的展现形式既可以是2D形式也可以是3D形式,本公开对此并不限定。目标对象可以是自然人,也可以是机器人,还可以是其他智能设备。交互对象和目标对象之间的交互方式可以是主动交互方式,也可以是被动交互方式。在一示例中,目标对象可以通过做出手势或者肢体动作来发出需求,通过主动交互的方式来触发交互对象与其交互。在另一示例中,交互对象可以通过主动打招呼、提示目标对象做出动作等方式,使得目标对象采用被动方式与交互对象进 行交互。
在一些实施例中,图1示出本公开至少一个实施例提出的显示设备。该显示设备为具有透明显示屏的显示设备,其可以在透明显示屏上显示立体画面,以呈现出具有立体效果的交互对象。例如图1中透明显示屏显示的交互对象有虚拟卡通人物。
在一些实施例中,显示设备也可以为手机或者平板电脑等移动终端,在移动终端上可以安装有能够展示交互对象的应用程序(Application,APP),比如为实现交互对象与用户之间的交互的专属应用程序,或者,也可以为配置有实现交互对象的交互能力的软件开发工具包(Software Development Kit,SDK)的通用应用程序。比如,在一些银行APP上,可以嵌入实现交互对象的交互能力的SDK,进而在运行银行APP的情况下,可以根据需求调取交互对象来实现与用户之间的交互。
示例性的,上述显示设备中可以配置有存储器和处理器,存储器用于存储可在处理器上运行的计算机指令,处理器用于在执行计算机指令时实现本公开提供的交互方法,以使透明显示屏中显示的交互对象对目标对象进行响应。
本公开实施例中提出一种能够由交互对象来响应用户的触发操作的方案,交互对象可通过拟人化的动作对用户做出响应,使得交互流程更为流畅,能够有效提升交互体验。示例性的,本公开实施例也可以应用在由交互对象对显示设备提供的功能进行介绍的场景下,可以便于一些对文字理解能力较弱或者没有时间查看文字引导信息的用户群体快速获取所需信息。
下面,结合具体实施例对本公开进行详细说明。
图2为本公开实施例提供的交互方法的流程图,参照图2所示,所述交互方法包括步骤201-步骤203,其中:
步骤201,接收对显示设备的第一触发操作。
步骤202,获取用于响应第一触发操作的交互对象的动作标识。
步骤203,基于动作标识,控制显示设备展示的交互对象进行响应,响应中包括与交互对象的动作标识对应的动作。
在一些实施例中,上述交互方法可以由显示设备执行,即在本地完成对第一触发操作的响应;上述交互方法也可以由服务器执行,即由服务器完成用于响应第一触发操作的数据的获取,并指示显示设备的交互对象进行响应。
在上述交互方法由显示设备来执行的情况下,步骤201中接收对显示设备的第一触发操作,可以由显示设备检测是否存在第一触发操作。比如,可以通过检测在显示设备的显示屏上是否有触摸操作,来确定是否存在第一触发操作;或者,通过检测显示设备采集的图像中是否有设定的用户面部表情或用户肢体动作,来确定是否存在第一触发操作;或者,通过检测显示设备采集的语音指示,来确定是否存在第一触发操作等。具体检测方式可以通过显示设备配置的传感器等支持的检测能力来配置,本公开对此并不限定。
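To make the detection step above concrete, the following is a minimal Python sketch of how a display device might combine the listed detection channels (a touch on the display screen, a set facial expression or body gesture in captured images, or a voice instruction) into one first-trigger decision. All function names, argument shapes and the gesture/voice checks are assumptions for illustration only, not part of the disclosure.

```python
def looks_like_set_gesture(image) -> bool:
    """Stand-in for gesture / facial-expression recognition on a captured image."""
    return bool(image)

def detect_first_trigger(touch_event=None, captured_image=None, voice_text=None):
    """Return a description of the first trigger operation, or None if nothing was detected.

    Each argument stands in for one detection capability of the display device; a real
    device would only use the channels its sensors actually support.
    """
    if touch_event is not None:                       # touch operation on the display screen
        return {"type": "touch", "target": touch_event}
    if captured_image is not None and looks_like_set_gesture(captured_image):
        return {"type": "gesture"}                    # set user gesture or facial expression
    if voice_text is not None and "open" in voice_text.lower():
        return {"type": "voice", "utterance": voice_text}
    return None

if __name__ == "__main__":
    print(detect_first_trigger(touch_event="target_app_icon"))  # {'type': 'touch', ...}
```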
在上述交互方法由服务器来执行的情况下,显示设备可以将检测到的第一触发操作上报给服务器,以便服务器来获取用于响应第一触发操作的各类数据,进而将显示设备所需的数据发送给显示设备,由显示设备展示交互对象的响应。
在一些实施例中,第一触发操作可以用于请求显示设备提供某种功能或某种数据。示例性的,第一触发操作可以为对显示设备的目标应用程序的触发操作,比如,点 击目标应用程序的图标的触发操作,以请求显示设备开启目标应用程序,以提供某种服务。或者,第一触发操作还可以为对显示设备的目标应用程序的目标功能选项的触发操作,比如,点击目标应用程序中的目标功能选项的触发操作,以请求目标应用程序启动目标功能选项对应的功能等。触发操作的具体方式如上述说明,既可以是对显示设备的接触式操作,也可以是无接触式的操作,比如通过做出某种手势等动作或者输入语音的方式等。
本公开实施例中,既可以直接对第一触发操作进行响应,也可以先获取第一触发操作对应的响应数据进而做出响应,下面以两个具体的实施流程为例来说明基于第一触发操作的响应过程。
图3所示,为本公开实施例提供的一种基于第一触发操作的响应过程的流程示意图,包括如下步骤:
步骤301,接收对显示设备的第一触发操作。
步骤302,基于对显示设备的触发操作与交互对象的动作标识之间的预设映射关系,获取与第一触发操作对应的动作标识。
步骤303,基于动作标识,控制显示设备展示的交互对象进行响应,响应中包括与交互对象的动作标识对应的动作。
图4所示,为本公开实施例提供的另一种基于第一触发操作的响应过程的流程示意图,包括如下步骤:
步骤401,接收对显示设备的第一触发操作。
步骤402,获取用于响应第一触发操作的响应数据。
步骤403,基于对显示设备的触发操作的响应数据与交互对象的动作标识之间的预设映射关系,获取用于响应第一触发操作的响应数据对应的动作标识。
步骤404,基于动作标识,控制显示设备展示的交互对象进行响应,响应中包括与交互对象的动作标识对应的动作。
以上实施例中,通过配置触发操作或触发操作的响应数据与动作标识之间的预设映射关系,可以在接收到第一触发操作后,快速精准地查找到用于响应第一触发操作的动作标识,以便控制交互对象做出与动作标识对应的动作,以响应第一触发操作。
对于步骤301和步骤401的有关描述可以参见上文相关部分,这里不再重复赘述。
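As an illustration of the two lookup flows above (the flow of Fig. 3, where the trigger operation is mapped directly to an action identifier, and the flow of Fig. 4, where the trigger operation is first mapped to response data whose key text is then mapped to an action identifier), the following Python sketch models both preset mappings as plain dictionaries. Every trigger name, key text and action identifier is a made-up example, not part of the disclosure.

```python
from typing import Optional

TRIGGER_TO_ACTION = {
    "open_target_app": "wave_greeting",         # Fig. 3 flow: trigger operation -> action identifier
    "tap_transfer_option": "point_to_transfer",
}

KEYTEXT_TO_ACTION = {
    "microphone": "point_upper_right",          # Fig. 4 flow: key text in response data -> action identifier
    "transfer": "point_to_transfer",
}

def action_id_from_trigger(trigger_op: str) -> Optional[str]:
    """Look up the action identifier mapped directly to the first trigger operation."""
    return TRIGGER_TO_ACTION.get(trigger_op)

def action_id_from_response_text(response_text: str) -> Optional[str]:
    """Scan the textual response data for key text and return the mapped action identifier."""
    lowered = response_text.lower()
    for key_text, action_id in KEYTEXT_TO_ACTION.items():
        if key_text in lowered:
            return action_id
    return None

if __name__ == "__main__":
    print(action_id_from_trigger("open_target_app"))                    # wave_greeting
    print(action_id_from_response_text("Tap the microphone to speak"))  # point_upper_right
```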
考虑到不同的交互场景下,用户通过第一触发操作所请求的内容各式各样,响应第一触发操作的内容也存在多样性,故针对一些较为简单的交互场景下,本公开实施例可以配置第一触发操作与交互对象的动作标识之间的预设映射关系,列举一个简单的例子,第一触发操作可以为用户做出的打招呼的手势动作,那么可以直接将第一触发操作中打招呼的手势动作与交互对象响应的打招呼的手势动作之间建立映射关系,无需额外的响应数据。
在一些较为复杂的交互场景下,也可以配置第一触发操作的响应数据与交互对象的动作标识之间的预设映射关系。在一些实施方式中,可以通过第一触发操作识别用户的交互意图,然后根据交互意图,查找到符合该交互意图的响应数据。其中,响应数据可以是预先存储好的,也可以是通过网络向其它内容服务器请求获取到的,本公开对此并不限定。
当然,具体实施中,用于响应第一触发操作的数据除了动作标识之外,还可以包括响应数据,响应数据包括但不限于交互对象针对该第一触发操作的响应数据,还可以包括显示设备上目标应用程序针对该第一触发操作的响应数据等。其中,预设映射关系具体是建立在第一触发操作与交互对象的动作标识之间,还是建立在第一触发操作的响应数据与交互对象的动作标识之间,可以视实际交互场景的需求来配置,并且对于同一交互场景下可支持上述两种预设映射关系的配置。
响应数据的形式有很多种,在一种可能的实施方式中,响应数据可以包括文本数据,那么预设映射关系可以包括文本数据中的关键文本数据与动作标识之间的预设映射关系。列举一个简单的例子,第一触发操作为用户请求开启目标应用程序的触发操作,尤其是用户首次开启目标应用程序的交互场景下,可以由交互设备来介绍目标应用程序的使用说明,那么响应数据可以为目标应用程序中各个功能选项的使用说明,功能选项可作为关键文本数据,功能选项可与交互对象的动作标识建立预设映射关系,在该交互场景下,交互对象的动作标识例如为指向该功能选项的肢体动作的标识,这样,在展示交互对象的响应内容的过程中,可以在显示设备的界面中呈现出交互对象指向所介绍的各个功能选项的展示效果。
其中,预设映射关系可以是后台预先配置好的,用于记录不同触发操作对应的交互对象的特定响应方式,其中,特定响应方式可以通过特定的动作标识来标记。比如,第一触发操作为请求开启目标应用程序的触发操作,则特定的动作标识可以标识开启目标应用程序后,交互对象打招呼的动作,或者,交互对象介绍目标应用程序的使用说明过程中指向使用说明涉及的各个功能选项的动作。当然,在具体实施中,预设映射关系也可以是基于深度学习算法进行反复学习后得到的,这样,在接收到第一触发操作或第一触发操作的响应数据之后,可以通过深度学习模型来预测相映射的交互对象的动作标识。
在一些实施例中,预设映射关系中第一触发操作或第一触发操作的响应数据,所对应的动作标识可以是至少一个,也就是说,一个第一触发操作可映射一个动作标识,也可以映射至少两个动作标识,这样,交互对象在进行响应时可以做出与一个动作标识对应的特定动作,也可以做出与至少两个动作标识对应的一系列的特定动作。其中,在映射有至少两个动作标识的情况下,至少两个动作标识可以具备排列关系,比如按照执行先后顺序来配置至少两个动作标识的排列关系,如通过添加执行的时间戳等方式。
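Where one trigger operation maps to several action identifiers, the embodiment above notes that the identifiers can be ordered, for example by execution timestamps. A minimal sketch of such an ordered mapping follows; the timestamps and identifiers are illustrative only.

```python
# One trigger operation mapped to several action identifiers, ordered by an
# execution timestamp in milliseconds (all values are illustrative).
TRIGGER_TO_ACTION_SEQUENCE = {
    "open_target_app": [
        (0, "wave_greeting"),           # greet first
        (1500, "point_upper_right"),    # then point at the introduced function option
    ],
}

def ordered_action_ids(trigger_op: str) -> list:
    """Return the action identifiers mapped to a trigger operation, in execution order."""
    sequence = TRIGGER_TO_ACTION_SEQUENCE.get(trigger_op, [])
    return [action_id for _timestamp, action_id in sorted(sequence)]

if __name__ == "__main__":
    print(ordered_action_ids("open_target_app"))  # ['wave_greeting', 'point_upper_right']
```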
其中,交互对象的动作标识,用于标识交互对象做出的特定动作,以便在交互对象进行响应的过程中,能够做出与动作标识对应的动作。在一些实施例中,动作标识包括交互对象的肢体动作标识,和/或,交互对象在做出动作时的展示位置标识。既可以通过肢体动作标识,标识交互对象在进行响应过程中所做出的具体肢体动作,还可以通过展示位置标识,标识交互对象在进行响应过程中展示在显示设备的显示区域中的位置,通过以上两种标识中的至少一种,可以提升交互对象在展示过程中的展示效果。
肢体动作标识用于标识交互对象的特定肢体动作,特定肢体动作可以是交互对象的头部的动作,也可以是交互对象的身体躯干的动作,头部的动作也可以包括面部表情动作。展示位置标识用于标识交互对象在做出动作时的特定展示位置,特定展示位置为显示设备的界面上的特定展示位置,在该特定展示位置上能够便于用户观看响应内容,避免可能存在的遮挡等问题,能够达到更好的交互效果。
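One possible way to model the two kinds of identifiers described above is a small record holding an optional body action identifier and an optional display position identifier. The field names and the pointing-action naming convention below are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActionIdentifier:
    """An action identifier may carry a body-action part, a display-position part, or both."""
    body_action_id: Optional[str] = None        # e.g. "point_upper_right": a specific body action
    display_position_id: Optional[str] = None   # e.g. "B": where the object is shown while acting

    def is_pointing_action(self) -> bool:
        # Hypothetical convention: pointing body actions are prefixed with "point_".
        return bool(self.body_action_id and self.body_action_id.startswith("point_"))
```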
在一些实施例中,在动作标识包括交互对象的肢体动作标识的情况下,动作标识对应的动作包括但不限于交互对象指向显示设备的目标展示区域的肢体动作。交互对象的指向性的肢体动作,可以应用于对显示设备的界面上的有关功能进行说明的场景下, 示例性的,可以是对于目标应用程序上的某些功能进行介绍的场景下。在该场景下,接收的第一触发操作,可以是对显示设备的目标应用程序的触发操作,也可以是对显示设备的目标应用程序的目标功能选项的触发操作。通过指向性的动作标识可以使用户在交互过程中能够快速获知交互对象当前所响应的具体内容,使得交互过程更为逼真流畅,和/或,通过交互对象的展示位置标识来标识做出指向性或其它动作时的位置,这样,可以便于用户观看响应内容,避免可能存在的遮挡等问题,能够达到更好的交互效果。
下面列举几种在配置了不同动作标识的情况下的可能的交互过程:
示例一,在动作标识包括交互对象的肢体动作标识的情况下,动作标识对应的动作包括交互对象指向显示设备的目标展示区域的肢体动作。有一种可能的交互过程为:
接收到对显示设备的目标应用程序的第一触发操作之后,可以获取该第一触发操作对应的响应数据,响应数据包括目标应用程序上的功能介绍的文本数据,进一步地,基于响应数据与交互对象的动作标识之间的预设映射关系,可以获取到与文本数据中的关键文本数据对应的交互对象的动作标识。其中,关键文本数据例如为描述目标应用程序上的特定功能的文本数据,比如,描述目标应用程序上第一功能的文本数据,第一功能的功能选项可以在目标应用程序的显示界面的预设展示区域。为了便于用户快速找到该第一功能的位置并熟悉该第一功能的操作说明,可以将交互对象的动作标识配置为指向该第一功能所在的预设展示区域的动作的标识。这样,可以基于交互对象的动作标识,控制显示设备展示交互对象进行响应的内容,响应的内容可以包括指向第一功能所在的预设展示区域的动作。当然,响应的内容还可以包括其它形式的内容,比如回复一段语音或者展示一些提示内容等,本公开对此并不限定。其中,参照图5所示,为列举的一种展示有交互对象的响应内容的显示界面,交互对象的响应内容中包括指向“麦克风”功能的动作,且还展示有对应的麦克风的操作说明。
示例二,在动作标识包括交互对象在做出动作时的展示位置标识的情况下,动作标识对应的动作包括交互对象在目标展示位置上做出的动作。有一种可能的交互过程为:
接收到对目标应用程序的目标功能选项的第一触发操作之后,可以获取第一触发操作对应的响应数据,响应数据包括目标功能选项提供的功能相关数据,进一步地,基于触发操作与交互对象的动作标识之间的映射关系,可以获取到与第一触发操作对应的交互对象的动作标识。为了便于用户快速了解目标功能选项提供的功能,可以使交互对象在目标展示位置上做出介绍该目标功能选项提供的功能的动作。这样,可以基于交互对象的动作标识,控制显示设备展示交互对象进行响应的内容,响应的内容可以包括交互对象在目标展示位置上做出的动作。其中,参照图6所示,为列举的一种展示有交互对象的响应内容的显示界面,目标功能选项例如为目标应用程序中转账选项提供的转账功能,在目标应用程序跳转到转账显示界面之后,可以展示交互对象的响应内容,为了便于用户观看交互对象的响应内容,交互对象的动作标识中可配置有展示位置标识,以便交互对象在转账操作区域的左下方的目标展示位置上介绍有关转账的功能。
示例三,在动作标识包括肢体动作标识和展示位置标识的情况下,动作标识对应的动作包括交互对象在目标展示位置上做出指向目标展示区域的肢体动作。有一种可能的交互过程为:
可以在示例一所描述的交互流程的基础上,增加交互对象的展示位置标识与响应数据的预设映射关系的配置。这样,可以基于交互对象的肢体动作标识和展示位置标识,控制显示设备展示交互对象进行响应的内容,响应的内容可以包括交互对象在展示 位置标识对应的目标展示位置上做出与肢体动作标识对应的肢体动作。其中,可继续参照图5和图7所示,为列举的一种展示有交互对象的响应内容的显示界面,交互对象的响应内容中包括在展示位置标识对应的目标展示位置“B”上指向“麦克风”功能的动作,且还展示有对应的麦克风的操作说明。其中,显示界面中的“A”、“B”、“C”的位置标记仅为示例性说明,且是出于便于理解的目的标记在显示界面中,实际应用过程中“A”、“B”、“C”可无需显示。
在以上提供的示例一至示例三中,交互对象指向的目标展示区域可以是被触发的目标功能选项所在的预设展示区域,也可以是与被触发的目标应用程序关联的展示区域,还可以是与被触发的目标功能选项关联的展示区域。同理,目标展示位置,也可以是基于被触发的目标功能选项和被触发的目标应用程序来确定。
需要说明的是,在实际应用中,可基于具体交互需求来配置预设映射关系中的肢体动作标识和展示位置标识,本公开对此并不限定。
为便于理解,下面结合图5和图7给出的上述示例一和示例三的交互场景,举例说明一种预设映射关系的具体配置方式。
参照图8所示,可以将目标应用程序的显示界面分成九宫格形式,包括“左上”、“上”、“右上”、“左”、“中”、“右”、“左下”、“下”、“右下”这九个展示区域。每个展示区域中可包括目标应用程序的不同功能选项。
为了使交互对象做出响应时能够指向对应的功能选项,交互对象的肢体动作标识可包括六组肢体动作,包括图9和图10所示的“左上”,“左”,“左下”,“右上”,“右”,“右下”,交互对象的展示位置标识可包括三个展示位置“A”,“B”,“C”。
在配置预设映射关系时,如第一触发操作所触发的目标应用程序的目标功能选项在“左上”,“左”,“左下”,“右上”,“右”,“右下”这六个展示区域的其中之一,那么,与第一触发操作或第一触发操作的响应数据存在映射关系的展示位置标识可以是“B”,与第一触发操作或第一触发操作的响应数据存在映射关系的肢体动作标识可以为“左上”,“左”,“左下”,“右上”,“右”,“右下”的其中之一,具体选择哪个肢体动作标识由被触发的目标功能选项所在展示区域来确定。比如,若被触发的目标功能选项所在展示区域在“左上”,则可选择肢体动作标识为“右上”。
若第一触发操作所触发的目标应用程序的目标功能选项在“上”,“中”,“下”这三个展示区域的其中之一,那么,与第一触发操作或第一触发操作的响应数据存在映射关系的展示位置标识可以是“A”或者“C”,与第一触发操作或第一触发操作的响应数据存在映射关系的肢体动作标识可以为“左上”,“左”,“左下”,“右上”,“右”,“右下”的其中之一,具体选择哪个肢体动作标识由被触发的目标功能选项所在展示区域来确定。比如,若被触发的目标功能选项所在展示区域在“上”,展示位置标识为“A”,则可选择肢体动作标识为“左上”;或者,若被触发的目标功能选项所在展示区域在“上”,展示位置标识为“C”,则可选择肢体动作标识为“右上”。
其中,在响应数据为文本数据,预设映射关系为关键文本与交互对象的动作标识之间的映射关系时,关键文本例如可以是用于描述上述目标功能选项所在的展示区域的文本,可以直接用“上方的”、“右上方的”、“下方的”这种具备明确方位信息的文本来作为关键文本,当然,也可以用目标功能选项的名称作为关键文本,其中,目标功能选项的名称已预先记录有该目标功能选项所在的展示区域。
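The nine-grid example above (six pointing body actions, display positions A/B/C, and the rule that options in the left or right columns are answered from position B while options in the middle column are answered from A or C) can be written as a small selection routine. The sketch below follows the worked examples in the text; the English region names, the avatar-perspective mirroring and the handling of the centre cell are assumptions made for illustration.

```python
def choose_action(option_region: str, prefer_position: str = "A"):
    """Return (display_position_id, body_action_id) for the grid region that holds
    the triggered function option, following the nine-grid example above.

    Body actions are assumed to be named from the interactive object's own perspective,
    which is why a screen region on the viewer's left maps to a pointing action on the
    object's right (the object faces the viewer).
    """
    mirror = {  # left/right screen columns -> pointing action, object standing at "B"
        "upper_left": "upper_right", "left": "right", "lower_left": "lower_right",
        "upper_right": "upper_left", "right": "left", "lower_right": "lower_left",
    }
    if option_region in mirror:
        return "B", mirror[option_region]

    # Middle column ("upper", "center", "lower"): the object stands at "A" or "C".
    position = prefer_position if prefer_position in ("A", "C") else "A"
    side = "left" if position == "A" else "right"   # A -> object's left, C -> object's right
    row = {"upper": "upper_", "center": "", "lower": "lower_"}[option_region]
    return position, row + side

if __name__ == "__main__":
    print(choose_action("upper_left"))   # ('B', 'upper_right')
    print(choose_action("upper", "A"))   # ('A', 'upper_left')
    print(choose_action("upper", "C"))   # ('C', 'upper_right')
```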
以上仅为本公开实施例的可行性方式的举例说明,在实际应用中,可根据实际 交互需求来合理配置预设映射关系,本公开对此并不限定。
继续参照图2至图4所示的交互方法的实施流程,针对步骤203、步骤303以及步骤404中所示的,基于动作标识,控制显示设备展示的交互对象进行响应的具体处理流程,可参见图11所示,包括如下步骤:
步骤4041,获取与动作标识对应的驱动数据。
在一些实施例中,驱动数据用于调整交互对象的显示状态,交互对象作为虚拟形象,其后台记录的是交互对象的3D模型或2D模型,通过驱动数据调整3D模型或2D模型中相关部位的参数,进而可以改变交互对象的显示状态。其中,相关部位包括但不限于头部、肢体各个关节部位、面部表情等。在获取到动作标识之后,动作标识可以反映出交互对象待呈现的显示状态,故可以获取与动作标识对应的驱动数据。在本公开实施例应用在显示设备的情况下,驱动数据既可以在本地数据库中存储,也可以在云端数据库中存储;在本公开实施例应用在服务器的情况下,驱动数据既可以在服务器本身存储单元上存储,也可以在其它业务相关服务器上存储,本公开对此并不限定。
步骤4042,利用驱动数据,渲染交互对象的显示画面。
在一些实施例中,获取驱动数据之后可以利用内置的渲染工具渲染出交互对象的显示画面,具体使用的渲染工具,本公开并不限定。
其中,结合以上实施例中的描述,显示画面中可以包括以下画面内容中的任一种:交互对象做出与肢体动作标识对应的肢体动作的画面内容;交互对象在展示位置标识对应的目标展示位置上做出与肢体动作标识对应的肢体动作的画面内容。有关画面内容的示例,可以参照图5至图7所示。
步骤4043,控制显示设备显示交互对象的显示画面。
在本公开实施例应用在显示设备的情况下,显示画面可以在本地渲染成功后直接呈现在显示设备上;在本公开实施例应用在服务器的情况下,服务器可以将渲染成功的显示画面发送给显示设备,然后由显示设备来进行呈现。
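Steps 4041 to 4043 above form a small pipeline: obtain the driving data for the action identifier, render the interactive object's display picture from it, and have the display device show the picture. The following is a schematic sketch only, with the driving-data store, the renderer and the display device all replaced by stand-ins; the data shapes are not part of the disclosure.

```python
from typing import Any

DRIVING_DATA = {  # action identifier -> driving data (stand-in values; stored locally or in the cloud)
    "point_upper_right": {"joints": {"right_arm": 45.0}, "position": "B"},
}

def get_driving_data(action_id: str) -> dict:
    """Step 4041: obtain the driving data corresponding to the action identifier."""
    return DRIVING_DATA[action_id]

def render_display_picture(driving_data: dict) -> str:
    """Step 4042: render the interactive object's display picture from the driving data.

    A real implementation would adjust the parameters of the object's 2D/3D model and
    call a rendering tool; here the "picture" is just a descriptive string.
    """
    return f"object at {driving_data['position']} with joints {driving_data['joints']}"

def show_on_display_device(picture: str) -> None:
    """Step 4043: control the display device to display the rendered picture."""
    print("displaying:", picture)

if __name__ == "__main__":
    show_on_display_device(render_display_picture(get_driving_data("point_upper_right")))
```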
此外,在以上实施例中,响应数据包括但不限于交互对象针对该第一触发操作的响应数据、显示设备上目标应用程序针对该第一触发操作的响应数据等。
示例性的,还可以获取第一触发操作对应的响应数据,进而可以控制显示设备播放响应数据中的语音数据,和/或,在显示设备的界面上展示响应数据的提示框等,本公开对此并不限定。上述实施方式并不限定于进行与交互对象的动作标识相关的响应,还可以通过播放语音或者展示提示框等方式来实现响应,使得响应数据的呈现方式多样化,提升交互体验。
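A minimal sketch of this additional presentation path, playing voice data from the response and/or showing a prompt box; the device methods and the response-data keys below are invented for illustration.

```python
class DisplayDeviceStub:
    """Stand-in for the display device; real audio playback and UI calls are assumed elsewhere."""
    def play_voice(self, voice):
        print("playing voice data:", voice)

    def show_prompt_box(self, text):
        print("showing prompt box:", text)

def present_response(device, response_data: dict) -> None:
    """Play the voice data in the response and/or show a prompt box for it.

    `response_data` is assumed to be a dict that may contain "voice" and/or "text";
    both keys are illustrative, not part of the disclosure.
    """
    if response_data.get("voice") is not None:
        device.play_voice(response_data["voice"])
    if response_data.get("text") is not None:
        device.show_prompt_box(response_data["text"])

if __name__ == "__main__":
    present_response(DisplayDeviceStub(), {"voice": "welcome.wav", "text": "Tap the microphone to speak"})
```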
其中,在显示设备显示交互对象的显示画面的情况下,若交互对象的显示画面显示在目标应用程序的显示界面上,则可以使目标应用程序的响应内容的层级处于交互对象的响应内容的层级之上。通过目标应用程序和交互对象的显示画面分层次的处理,两者可以分别进行响应,可以避免交互对象的响应过程与目标应用程序的运行可能出现的冲突。示例性的,可以将交互对象的响应内容设置在目标应用程序的背景层上,这样,控制显示设备在目标应用程序的显示界面的背景层上显示交互对象的显示画面,显示界面位于背景层之上,其显示效果可参见图12所示,交互对象的显示画面在背景层,在背景层之上可以显示目标应用程序的响应内容。
在一些实施例中,在利用交互对象的驱动数据,渲染交互对象的显示画面的过程中,为了使交互对象呈现出来的效果更为逼真,本公开实施例可以识别驱动数据的驱 动模式,响应于驱动模式,根据驱动数据获取交互对象的控制参数,根据控制参数控制交互对象的姿态。
示例性的,根据驱动数据的类型,获取驱动数据对应的语音数据序列,语音数据序列包括多个语音数据单元;若检测到所述语音数据单元中包括目标数据,则确定所述驱动数据的驱动模式为第一驱动模式,目标数据与交互对象的预设控制参数相对应;进而,可以响应于第一驱动模式,将目标数据对应的预设控制参数,作为所述交互对象的控制参数。其中,目标数据包括关键词或关键字,关键词或关键字与交互对象的设定动作的预设控制参数相对应。
若未检测到语音数据单元中包括目标数据,则确定驱动数据的驱动模式为第二驱动模式;进而可以响应于第二驱动模式,获取语音数据序列中的至少一个语音数据单元的特征信息;获取与特征信息对应的交互对象的控制参数。
其中,上述语音数据序列包括音素序列,在获取所述语音数据序列中的至少一个语音数据单元的特征信息时,可以对所述音素序列进行特征编码,获得所述音素序列对应的第一编码序列;根据第一编码序列,获取至少一个音素对应的特征编码;根据特征编码,获得至少一个音素的特征信息。
上述语音数据序列还可以包括语音帧序列,在获取所述语音数据序列中的至少一个语音数据单元的特征信息时,还可以获取所述语音帧序列对应的第一声学特征序列,所述第一声学特征序列包括与所述语音帧序列中的每个语音帧对应的声学特征向量;根据所述第一声学特征序列,获取至少一个语音帧对应的声学特征向量;根据所述声学特征向量,获得所述至少一个语音帧对应的特征信息。
其中,上述交互对象的控制参数包括所述交互对象的至少一个局部区域的控制向量。示例性的,可以将上述特征信息输入至循环神经网络,获得与特征信息对应的交互对象的控制参数。上述根据所述驱动数据获取所述交互对象的控制参数,可以包括:根据所述驱动数据获取交互对象的至少一个局部区域的控制向量;根据控制参数控制交互对象的姿态,可以包括:根据所获取的所述至少一个局部区域的控制向量,控制所述交互对象的面部动作和/或肢体动作。
当然,具体实施中也可以包括其它驱动模式,本公开对此并不限定。
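The two driving modes above reduce to: when target data (a keyword) is detected in a voice data unit, use the preset control parameters associated with it (first driving mode); otherwise derive control parameters from feature information of the voice data (second driving mode). The sketch below outlines that decision; the keyword table, the feature encoding and the network that maps features to control vectors are all replaced by stand-ins and are not the actual models of the disclosure.

```python
from typing import Sequence

PRESET_CONTROL_PARAMS = {  # target keyword -> preset control parameters of a set action (illustrative)
    "hello": [0.9, 0.1, 0.0],
    "transfer": [0.2, 0.8, 0.3],
}

def extract_features(voice_unit: str) -> list:
    """Stand-in for feature encoding of a phoneme, or acoustic features of a voice frame."""
    return [len(voice_unit) / 10.0, 0.5]

def features_to_control_params(features: list) -> list:
    """Stand-in for the recurrent network mapping feature information to control parameters
    (control vectors of local regions of the interactive object)."""
    return [min(1.0, f) for f in features]

def control_params_for(voice_units: Sequence) -> list:
    """Pick the driving mode per voice data unit and return control parameters."""
    params = []
    for unit in voice_units:
        if unit in PRESET_CONTROL_PARAMS:              # first driving mode: target data detected
            params.append(PRESET_CONTROL_PARAMS[unit])
        else:                                          # second driving mode: derive from features
            params.append(features_to_control_params(extract_features(unit)))
    return params

if __name__ == "__main__":
    print(control_params_for(["hello", "ni", "hao"]))
```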
参照图13所示,本公开实施例提供了一种交互***,所述交互***包括显示设备1301和服务器1302。
其中,显示设备1301,用于获取对显示设备1301的第一触发操作并将第一触发操作发送给服务器1302,基于服务器1302的指示控制显示设备展示的交互对象进行响应;
服务器1302,用于接收第一触发操作,获取用于响应第一触发操作的交互对象的动作标识;基于动作标识,指示显示设备1301控制交互对象进行响应;响应中包括与交互对象的动作标识对应的动作。
示例性的,在该交互***中,显示设备1301可以检测到第一触发操作,进而可以向服务器1302请求响应数据,相应地,服务器可以从预先存储有预设映射关系的数据库中获取用于响应第一触发操作的交互对象的动作标识,进而基于该动作标识,获取到对应的交互对象的驱动数据。
在一些实施例中,服务器1302可以通过渲染工具将驱动数据,渲染成交互对象的动画,然后直接将渲染后的结果下发至显示设备1301,以便显示设备1301展示渲染 后的结果,呈现出交互对象对第一触发操作进行响应的拟人化效果。
在另一些实施例中,服务器1302也可以将驱动数据下发给显示设备1301,由显示设备1301通过内置的渲染工具对驱动数据进行渲染,进而展示渲染后的结果,呈现出交互对象对第一触发操作进行响应的拟人化效果。
该交互***中服务器1302可以提供主要的计算能力,这样显示设备1301可以无需本地配置过多的处理数据,减轻显示设备1301的处理压力。
通过该交互***中显示设备1301和服务器1302之间的数据处理流程,能够实现由交互对象来响应用户的触发操作,交互对象可通过拟人化的动作对用户做出响应,使得交互流程更为流畅,能够有效提升交互体验。
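The division of labour in this interaction system can be outlined as a simple exchange: the display device reports the first trigger operation, the server resolves the action identifier and driving data (and may render the animation itself), and the device presents the result. Everything below, including the message shapes and function names, is assumed for illustration only.

```python
# Server side: resolve the action identifier and driving data for a reported trigger operation.
def server_handle_trigger(trigger_op: str) -> dict:
    action_id = {"open_target_app": "wave_greeting"}.get(trigger_op, "idle")
    driving_data = {"action_id": action_id, "joints": {"right_arm": 30.0}}
    # The server may render here and return frames instead, offloading work from the device.
    return {"driving_data": driving_data}

# Display device side: report the trigger operation, then render and present the response.
def device_on_trigger(trigger_op: str) -> None:
    instruction = server_handle_trigger(trigger_op)    # in practice, a network request
    print("rendering interactive object with", instruction["driving_data"])

if __name__ == "__main__":
    device_on_trigger("open_target_app")
```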
参见图14所示,本公开实施例提供了一种交互装置1400,该交互装置1400包括接收模块1401、获取模块1402和控制模块1403。
接收模块1401,用于接收对显示设备的第一触发操作;
获取模块1402,用于获取用于响应第一触发操作的交互对象的动作标识;
控制模块1403,用于基于动作标识,控制显示设备展示的交互对象进行响应,响应中包括与交互对象的动作标识对应的动作。
在一种可能的实施方式中,获取模块1402在用于获取用于响应第一触发操作的交互对象的动作标识时,包括:
基于对显示设备的触发操作与交互对象的动作标识之间的预设映射关系,获取与第一触发操作对应的动作标识。
在一种可能的实施方式中,获取模块1402在用于获取用于响应第一触发操作的交互对象的动作标识时,包括:
获取用于响应第一触发操作的响应数据;
基于对显示设备的触发操作的响应数据与交互对象的动作标识之间的预设映射关系,获取用于响应第一触发操作的响应数据对应的动作标识。
在一种可能的实施方式中,响应数据包括文本数据,预设映射关系包括文本数据中的关键文本数据与动作标识之间的预设映射关系。
在一种可能的实施方式中,动作标识包括交互对象的肢体动作标识,和/或,交互对象在做出动作时的展示位置标识。
在一种可能的实施方式中,接收模块1401在用于接收对显示设备的第一触发操作时,包括:
接收对显示设备的目标应用程序的第一触发操作;或者,
接收对显示设备的目标应用程序的目标功能选项的第一触发操作,目标功能选项位于目标应用程序的显示界面中的预设展示区域。
在一种可能的实施方式中,在动作标识包括交互对象的肢体动作标识的情况下,动作标识对应的动作包括交互对象指向显示设备的目标展示区域的肢体动作;
在动作标识包括交互对象在做出动作时的展示位置标识的情况下,动作标识对应的动作包括交互对象在目标展示位置上做出的动作;
在动作标识包括肢体动作标识和展示位置标识的情况下,动作标识对应的动作包括交互对象在目标展示位置上做出指向目标展示区域的肢体动作;
其中,目标展示区域为预设展示区域或者为与预设展示区域关联的展示区域。
在一种可能的实施方式中,控制模块1403在用于基于动作标识,控制显示设备展示的交互对象进行响应时,包括:
获取与动作标识对应的驱动数据;
利用驱动数据,渲染交互对象的显示画面,显示画面中包括以下画面内容中的任一种:交互对象做出与肢体动作标识对应的肢体动作的画面内容;交互对象在展示位置标识对应的目标展示位置上做出与肢体动作标识对应的肢体动作的画面内容;
控制显示设备显示交互对象的显示画面。
在一种可能的实施方式中,控制模块1403在用于控制显示设备显示交互对象的显示画面时,包括:
控制显示设备在目标应用程序的显示界面的背景层上显示交互对象的显示画面,显示界面位于背景层之上。
在一种可能的实施方式中,控制模块1403还用于:
获取第一触发操作对应的响应数据;
控制显示设备播放响应数据中的语音数据,和/或,在显示设备的界面上展示响应数据的提示框。
参见图2提供的交互方法流程,本公开实施例还提供了一种电子设备1500。图15为本公开实施例提供的电子设备1500的结构示意图,如图15所示,电子设备1500包括:处理器1501、存储器1502、和总线1503。存储器1502存储有处理器1501可执行的机器可读指令(比如,图14中的装置中的接收模块1401、获取模块1402、控制模块1403对应的执行指令等),当电子设备1500运行时,处理器1501与存储器1502之间通过总线1503通信,机器可读指令被处理器1501执行时使处理器1501执行如下处理:接收对显示设备的第一触发操作;获取用于响应第一触发操作的交互对象的动作标识;基于动作标识,控制显示设备展示的交互对象进行响应,响应中包括与交互对象的动作标识对应的动作。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时使该处理器执行上述方法实施例中所述的交互方法。其中,该存储介质可以是易失性或非易失性的计算机可读存储介质。
本公开实施例所提供的交互方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的交互方法,具体可参见上述方法实施例,在此不再赘述。
本公开实施例还提供一种计算机程序,该计算机程序被处理器执行时使该处理器实现前述实施例的任意一种方法。该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质。在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
本公开实施例中提出的交互方法、交互***、装置、设备及计算机程序,提出一种能够由交互对象来响应用户的触发操作的交互方案,可以利用第一触发操作对应的 动作标识,控制交互对象通过拟人化的动作对用户做出响应,且响应中包括与动作标识对应的动作,由此可使得交互流程更为逼真及流畅,能够有效提升交互体验。示例性的,本公开实施例提供的交互方案,也可以应用在由交互对象对显示设备提供的功能进行介绍的场景下,可以便于一些对文字理解能力较弱或者没有时间查看文字引导信息的用户群体快速获取所需信息。当然,本公开实施例提供的交互方案也可以应用在其它具备交互需求的应用场景下,本公开对此并不具体限定。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本公开实施例的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失性的计算机可读存储介质中。基于这样的理解,本公开实施例本质上或者说本公开实施例的全部或部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本公开,而非对其限制,本公开的保护范围并不局限于此。尽管参照前述实施例对本公开进行了详细的说明,但是本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术内容进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术内容的本质脱离本公开的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以权利要求的保护范围为准。

Claims (20)

  1. 一种交互方法,所述方法包括:
    接收对显示设备的第一触发操作;
    获取用于响应所述第一触发操作的交互对象的动作标识;
    基于所述动作标识,控制所述显示设备展示的交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
  2. 根据权利要求1所述的方法,其中,所述获取用于响应所述第一触发操作的交互对象的动作标识,包括:
    基于对所述显示设备的触发操作与所述交互对象的动作标识之间的预设映射关系,获取与所述第一触发操作对应的所述动作标识。
  3. 根据权利要求1所述的方法,其中,所述获取用于响应所述第一触发操作的交互对象的动作标识,包括:
    获取用于响应所述第一触发操作的响应数据;
    基于对所述显示设备的触发操作的响应数据与所述交互对象的动作标识之间的预设映射关系,获取用于响应所述第一触发操作的响应数据对应的所述动作标识。
  4. 根据权利要求3所述的方法,其中,所述响应数据包括文本数据,所述预设映射关系包括所述文本数据中的关键文本数据与动作标识之间的预设映射关系。
  5. 根据权利要求1至4任一项所述的方法,其中,所述动作标识包括所述交互对象的肢体动作标识,和/或,所述交互对象在做出动作时的展示位置标识。
  6. 根据权利要求1至5任一项所述的方法,其中,所述接收对显示设备的第一触发操作,包括:
    接收对所述显示设备的目标应用程序的第一触发操作;或者,
    接收对所述显示设备的目标应用程序的目标功能选项的第一触发操作,所述目标功能选项位于所述目标应用程序的显示界面中的预设展示区域。
  7. 根据权利要求5所述的方法,其中,
    在所述动作标识包括所述交互对象的肢体动作标识的情况下,所述动作标识对应的动作包括所述交互对象指向所述显示设备的目标展示区域的肢体动作;
    在所述动作标识包括所述交互对象在做出动作时的展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出的动作;
    在所述动作标识包括所述肢体动作标识和所述展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出指向所述目标展示区域的肢体动作。
  8. 根据权利要求1至7任一项所述的方法,其中,基于所述动作标识,控制所述显示设备展示的交互对象进行响应,包括:
    获取与所述动作标识对应的驱动数据;
    利用所述驱动数据,渲染所述交互对象的显示画面,所述显示画面中包括以下画面内容中的任一种:所述交互对象做出与肢体动作标识对应的肢体动作的画面内容;所述交互对象在展示位置标识对应的目标展示位置上做出与所述肢体动作标识对应的肢体动作的画面内容;
    控制所述显示设备显示所述交互对象的显示画面。
  9. 根据权利要求8所述的方法,其中,所述控制所述显示设备显示所述交互对象的显示画面,包括:
    控制所述显示设备在目标应用程序的显示界面的背景层上显示所述交互对象的显示画面,所述显示界面位于所述背景层之上。
  10. 根据权利要求1至9任一项所述的方法,所述方法还包括:
    获取所述第一触发操作对应的响应数据;
    控制所述显示设备播放所述响应数据中的语音数据,和/或,在所述显示设备的界面上展示所述响应数据的提示框。
  11. 一种交互***,包括:显示设备和服务器;其中,
    所述显示设备,用于获取对显示设备的第一触发操作并将所述第一触发操作发送给所述服务器,并且基于所述服务器的指示控制所述显示设备展示的交互对象进行响应;
    所述服务器,用于接收所述第一触发操作;获取用于响应所述第一触发操作的交互对象的动作标识;基于所述动作标识,指示所述显示设备控制所述交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
  12. 一种交互装置,所述装置包括:
    接收模块,用于接收对显示设备的第一触发操作;
    获取模块,用于获取用于响应所述第一触发操作的交互对象的动作标识;
    控制模块,用于基于所述动作标识,控制所述显示设备展示的交互对象进行响应,所述响应中包括与所述交互对象的动作标识对应的动作。
  13. 根据权利要求12所述的装置,其中,所述获取模块在用于获取用于响应所述第一触发操作的交互对象的动作标识时,包括:
    基于对所述显示设备的触发操作与所述交互对象的动作标识之间的预设映射关系, 获取与所述第一触发操作对应的所述动作标识。
  14. 根据权利要求12所述的装置,其中,所述获取模块在用于获取用于响应所述第一触发操作的交互对象的动作标识时,包括:
    获取用于响应所述第一触发操作的响应数据;
    基于对所述显示设备的触发操作的响应数据与所述交互对象的动作标识之间的预设映射关系,获取用于响应所述第一触发操作的响应数据对应的所述动作标识。
  15. 根据权利要求14所述的装置,其中,所述响应数据包括文本数据,所述预设映射关系包括所述文本数据中的关键文本数据与动作标识之间的预设映射关系。
  16. 根据权利要求12至15任一项所述的装置,其中,所述动作标识包括所述交互对象的肢体动作标识,和/或,所述交互对象在做出动作时的展示位置标识。
  17. 根据权利要求12至16任一项所述的装置,其中,所述接收模块在用于接收对显示设备的第一触发操作时,包括:
    接收对所述显示设备的目标应用程序的第一触发操作;或者,
    接收对所述显示设备的目标应用程序的目标功能选项的第一触发操作,所述目标功能选项位于所述目标应用程序的显示界面中的预设展示区域。
  18. 根据权利要求16所述的装置,其中,
    在所述动作标识包括所述交互对象的肢体动作标识的情况下,所述动作标识对应的动作包括所述交互对象指向所述显示设备的目标展示区域的肢体动作;
    在所述动作标识包括所述交互对象在做出动作时的展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出的动作;
    在所述动作标识包括所述肢体动作标识和所述展示位置标识的情况下,所述动作标识对应的动作包括所述交互对象在目标展示位置上做出指向所述目标展示区域的肢体动作。
  19. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,其中,所述机器可读指令被所述处理器执行时使所述处理器执行如权利要求1至10任一项所述的交互方法。
  20. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,其中,该计算机程序被处理器执行时使所述处理器执行如权利要求1至10任一项所述的交互方法。
PCT/CN2020/130092 2020-04-13 2020-11-19 交互方法、装置、交互***、电子设备及存储介质 WO2021208432A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020217026797A KR20210129067A (ko) 2020-04-13 2020-11-19 상호작용 방법, 장치, 상호작용 장치, 전자 장치 및 저장 매체
SG11202109187WA SG11202109187WA (en) 2020-04-13 2020-11-19 Interaction methods and apparatuses, interaction systems, electronic devices and storage media
JP2021556975A JP2022532696A (ja) 2020-04-13 2020-11-19 インタラクション方法、装置、システム、電子デバイス及び記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010285478.9 2020-04-13
CN202010285478.9A CN111488090A (zh) 2020-04-13 2020-04-13 交互方法、装置、交互***、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021208432A1 true WO2021208432A1 (zh) 2021-10-21

Family

ID=71791805

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130092 WO2021208432A1 (zh) 2020-04-13 2020-11-19 交互方法、装置、交互***、电子设备及存储介质

Country Status (6)

Country Link
JP (1) JP2022532696A (zh)
KR (1) KR20210129067A (zh)
CN (1) CN111488090A (zh)
SG (1) SG11202109187WA (zh)
TW (1) TW202138971A (zh)
WO (1) WO2021208432A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488090A (zh) * 2020-04-13 2020-08-04 北京市商汤科技开发有限公司 交互方法、装置、交互***、电子设备及存储介质
CN113138765A (zh) * 2021-05-19 2021-07-20 北京市商汤科技开发有限公司 交互方法、装置、设备以及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006235671A (ja) * 2005-02-22 2006-09-07 Norinaga Tsukiji 会話装置及びコンピュータ読み取り可能な記録媒体。
JP2009163631A (ja) * 2008-01-09 2009-07-23 Nippon Telegr & Teleph Corp <Ntt> バーチャルエージェント制御装置及びそのプログラム
CN105718133A (zh) * 2014-12-05 2016-06-29 珠海金山办公软件有限公司 一种引导用户操作的方法及装置
JP2017143992A (ja) * 2016-02-16 2017-08-24 株式会社トプコン 眼科検査システム及び眼科検査装置
US10685656B2 (en) * 2016-08-31 2020-06-16 Bose Corporation Accessing multiple virtual personal assistants (VPA) from a single device
CN107894833B (zh) * 2017-10-26 2021-06-15 北京光年无限科技有限公司 基于虚拟人的多模态交互处理方法及***
CN110874137B (zh) * 2018-08-31 2023-06-13 阿里巴巴集团控股有限公司 一种交互方法以及装置
CN110125932B (zh) * 2019-05-06 2024-03-19 达闼科技(北京)有限公司 一种机器人的对话交互方法、机器人及可读存储介质
CN110989900B (zh) * 2019-11-28 2021-11-05 北京市商汤科技开发有限公司 交互对象的驱动方法、装置、设备以及存储介质
CN110968194A (zh) * 2019-11-28 2020-04-07 北京市商汤科技开发有限公司 交互对象的驱动方法、装置、设备以及存储介质
CN110868635B (zh) * 2019-12-04 2021-01-12 深圳追一科技有限公司 视频处理方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116463A (zh) * 2013-01-31 2013-05-22 广东欧珀移动通信有限公司 个人数字助理应用的界面控制方法及移动终端
CN107085495A (zh) * 2017-05-23 2017-08-22 厦门幻世网络科技有限公司 一种信息展示方法、电子设备及存储介质
CN107294838A (zh) * 2017-05-24 2017-10-24 腾讯科技(深圳)有限公司 社交应用的动画生成方法、装置、***以及终端
CN108491147A (zh) * 2018-04-16 2018-09-04 青岛海信移动通信技术股份有限公司 一种基于虚拟人物的人机交互方法及移动终端
CN111488090A (zh) * 2020-04-13 2020-08-04 北京市商汤科技开发有限公司 交互方法、装置、交互***、电子设备及存储介质

Also Published As

Publication number Publication date
JP2022532696A (ja) 2022-07-19
SG11202109187WA (en) 2021-11-29
KR20210129067A (ko) 2021-10-27
TW202138971A (zh) 2021-10-16
CN111488090A (zh) 2020-08-04

Similar Documents

Publication Publication Date Title
JP7411133B2 (ja) 仮想現実ディスプレイシステム、拡張現実ディスプレイシステム、および複合現実ディスプレイシステムのためのキーボード
US11043031B2 (en) Content display property management
JP6013583B2 (ja) 有効インターフェース要素の強調のための方式
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
KR20210046591A (ko) 증강 현실 데이터 제시 방법, 장치, 전자 기기 및 저장 매체
CN108273265A (zh) 虚拟对象的显示方法及装置
CN111158469A (zh) 视角切换方法、装置、终端设备及存储介质
US10955929B2 (en) Artificial reality system having a digit-mapped self-haptic input method
KR20230022269A (ko) 증강 현실 데이터 제시 방법, 장치, 전자 기기 및 저장 매체
WO2021208432A1 (zh) 交互方法、装置、交互***、电子设备及存储介质
CN111771180A (zh) 增强现实环境中对象的混合放置
US11367416B1 (en) Presenting computer-generated content associated with reading content based on user interactions
CN111971714A (zh) 增强现实环境中的加载指示器
WO2019166005A1 (zh) 智能终端及其感控方法、具有存储功能的装置
CN103752010B (zh) 用于控制设备的增强现实覆盖
WO2020201998A1 (en) Transitioning between an augmented reality scene and a virtual reality representation
CN110609615A (zh) 用于在增强现实中集成触觉覆盖的***和方法
KR102587645B1 (ko) 터치스크린 제스처를 사용하여 정밀 포지셔닝하기 위한 시스템 및 방법
CN112717409B (zh) 虚拟车辆控制方法、装置、计算机设备及存储介质
US9041669B1 (en) Input/output device
US11948237B2 (en) System and method for mimicking user handwriting or other user input using an avatar
TWI799195B (zh) 利用虛擬物件實現第三人稱視角的方法與系統
US11934627B1 (en) 3D user interface with sliding cylindrical volumes
US20230410441A1 (en) Generating user interfaces displaying augmented reality graphics
CN114904279A (zh) 数据预处理方法、装置、介质及设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021556975

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20930989

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20930989

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 521430712

Country of ref document: SA