CN115933929A - Online interaction method, device, equipment and storage medium

Online interaction method, device, equipment and storage medium

Info

Publication number
CN115933929A
Authority
CN
China
Prior art keywords
user
target object
scene
information
rating information
Prior art date
Legal status
Pending
Application number
CN202211424539.0A
Other languages
Chinese (zh)
Inventor
刘景玉
李秋婷
王美伊
陈怡�
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date: 2022-11-14
Filing date: 2022-11-14
Publication date: 2023-04-07
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211424539.0A
Publication of CN115933929A
Legal status: Pending


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure provide an online interaction method. According to the method, one or more images in a scene are captured with a camera of a user device, and information associated with a target object in the scene is presented while the scene is presented via the user device, wherein the target object is identified based on at least one of the captured images and the information includes at least user rating information associated with the target object. In this way, the user can conveniently and quickly interact online within the scene, which makes the interaction more engaging and improves the user's interaction experience.

Description

Online interaction method, device, equipment and storage medium
Technical Field
Example embodiments of the present disclosure relate generally to Augmented Reality (AR), and more particularly, to a method, apparatus, device, and computer-readable storage medium for interacting in an AR scene.
Background
In daily life, people often attend offline exhibitions and discover new restaurants, new buildings, and the like, and in such scenarios they may wish to exchange views with other users. In some implementations, a user must know in advance which target object they wish to interact with, then search for and locate that target object in social software with an interaction function, and only in this way add their own comment on the target object or view other users' comments on it. This approach requires the user to know the information of the desired target object in advance, and the interaction process is neither timely nor convenient, so it cannot satisfy the user's interaction needs.
Disclosure of Invention
In a first aspect of the disclosure, a method of online interaction is provided. The method comprises the following steps: capturing one or more images in a scene with a camera of a user device; and while presenting the scene via the user device, presenting information associated with a target object in the scene, the target object identified based on at least one of the one or more captured images, the information including at least user rating information associated with the target object.
In a second aspect of the present disclosure, an online interaction device is provided. The device includes: an image capture module configured to capture one or more images in a scene with a camera of a user device; and a presentation module configured to present information associated with a target object in the scene, the target object being identified based on at least one of the one or more captured images, while presenting the scene via the user device, the information including at least user rating information associated with the target object.
In a third aspect of the disclosure, an electronic device is provided. The electronic device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform a method according to the first aspect of the present disclosure.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to perform the method according to the first aspect of the present disclosure.
It should be understood that the content described in this summary is not intended to identify essential or critical features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of various implementations of the present disclosure will become more apparent hereinafter with reference to the following detailed description in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a method of online interaction, in accordance with some embodiments of the present disclosure;
FIGS. 3A and 3B illustrate a flow chart of a method of online interaction according to some embodiments of the present disclosure;
FIG. 4A illustrates a schematic diagram of receiving first user rating information in an AR scenario;
FIG. 4B shows a schematic diagram of dynamic changes of virtual objects in an AR scene;
FIG. 5 illustrates a block diagram of an apparatus for augmented reality according to some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are illustrated in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The terms "one embodiment" and "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
The term "responsive to" means that the corresponding event occurs or the condition is satisfied. It will be appreciated that the timing of the performance of the subsequent action performed in response to the event or condition, and the time at which the event occurred or the condition was satisfied, are not necessarily strongly correlated. In some cases, follow-up actions may be performed immediately upon the occurrence of an event or the satisfaction of a condition; in other cases, follow-up actions may also be performed after a period of time has elapsed after an event occurred or a condition has been met.
It will be appreciated that the data involved in this disclosure, including but not limited to the data itself and the acquisition or use of the data, should comply with applicable laws, regulations, and related requirements.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenarios, and the like of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require acquiring and using the user's personal information, so that the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, an application program, a server, or a storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
As briefly discussed above, people often attend offline exhibitions, discover new restaurants or new buildings, and the like, and in such scenarios they may wish to exchange ideas with other users. As an example scenario, a user attends an offline artwork exhibition. During the viewing experience, the user wishes to exchange views on the artwork with other users in a timely and convenient manner. However, the visitors to the exhibition may not know each other, so the user cannot obtain other users' views on the artwork.
In some implementations, a user may use social software with an interaction function to interact with other users. Specifically, the user opens the social software, searches for the corresponding artwork to see other users' opinions about it, and may post their own opinion on the web page. However, this approach requires the user to know in advance which artwork they want to interact with, and the operation process is tedious, which reduces the fun of the interaction and degrades the user's interaction experience.
In recent years, AR technology has gradually entered people's lives. AR technology fuses virtual information with the real world. When AR technology is applied, an AR device may present virtual objects in an AR scene superimposed on a picture of the real world. In this way, the image appearing in the user's field of view includes both the real-world picture and the virtual objects, so that the user can see the virtual objects and the real world at the same time. The wide application of AR technology has therefore greatly improved people's interactive experience.
The embodiment of the disclosure provides an online interaction method. According to the method, a user captures one or more images in a scene with a camera of a user device such that a target object in the scene may be identified based on at least one of the one or more captured images. Further, at the user's user device, information associated with a target object in the scene is presented while the scene is presented, wherein the information includes at least user rating information associated with the target object.
In this way, the user does not need to know about the target object or search for it in social software in advance; the user only needs to capture one or more images in the scene through the camera to obtain information about the target object in the scene. In particular, according to embodiments of the present disclosure, information associated with a target object in a scene is presented while the scene is presented, wherein the information includes at least user rating information associated with the target object. Thus, the user only needs to open the camera to interact with other users, which makes the interaction both more engaging and faster.
In some embodiments below, the discussion will take a user attending an offline exhibition as the example scenario, and an artwork at the exhibition as the example target object. It should be understood, however, that the above example scenario and example target object should not be construed as limitations of the present disclosure. For example, when a user discovers a new building, the building may be the target object and the physical environment in which the building is located may be the example scenario. In other words, embodiments of the present disclosure are applicable to any online interaction scene, and the specific scene and target object may vary in other embodiments. Various example embodiments of the disclosure will now be described with reference to the accompanying drawings.
Example Environment
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. The example environment 100 includes users 130-1, 130-2, 130-3 and their respective user devices 110-1, 110-2, 110-3. For ease of discussion, users 130-1, 130-2, 130-3 may be referred to collectively or individually as users 130, and user devices 110-1, 110-2, 110-3 may be referred to collectively or individually as user devices 110.
In this example environment 100, an AR scene 150 is presented to a user 130 at or by a user device 110-1. The AR scene 150 may be presented on a screen of the user device 110-1. The AR scene 150 may include a real-world picture 155 and a virtual object 152 superimposed on the picture 155. In some embodiments, the user device 110 includes a camera 140, and the real-world picture 155 is generated based on corresponding images captured by the camera 140 of the user device 110.
In the particular implementation of FIG. 1, object 151 in the real-world picture 155 is a representation, in the AR scene 150, of a real object in the real world; object 151 is also sometimes referred to as target object 151. When the user is engaged in an AR experience, the picture 155 may change as the position and/or perspective of user device 110 changes, and accordingly, the target object 151 in the picture 155 may change.
The environment 100 further includes a remote device 180, and the user device 110 may communicate with the remote device 180. In some embodiments, the remote device 180 may be a cloud server.
It should be understood that AR scene 150 is merely exemplary and is not intended to limit the scope of the present disclosure. The AR scene 150 may include more or fewer virtual objects superimposed on the screen 155, or may include other elements, such as User Interface (UI) elements.
The user equipment 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, gaming device, wearable device, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, or any combination of the preceding, including accessories and peripherals of these devices, or any combination thereof. In some embodiments, user device 110 can also support any type of user interface (such as "wearable" circuitry, etc.).
Additionally, in some embodiments, user device 110 may have an AR engine 120 installed thereon. AR engine 120 is used to drive the rendering of the AR scene 150. In some embodiments, AR engine 120 may be an AR game engine; accordingly, the AR scene 150 may be an AR game scene. In some embodiments, AR engine 120 may be part of a content-sharing application (or "social application"). The social application can provide the user 130 with services related to multimedia content consumption, such as allowing users to publish, view, comment on, forward, and create multimedia works. Accordingly, the AR scene 150 may be an AR content authoring scene. For example, in some embodiments, the AR scene 150 may be part of a special effect provided by a social application.
It should be understood that the description of the structure and function of environment 100 is for exemplary purposes only and does not imply any limitation as to the scope of the disclosure. User device 110 may include any suitable structure and functionality to enable interaction with an AR scene.
Example procedure
Fig. 2 illustrates a flow diagram of a method 200 of online interaction, in accordance with some embodiments of the present disclosure. In some embodiments, method 200 may be implemented at the user device 110-1 shown in FIG. 1, for example by AR engine 120 or another suitable module/means.
For ease of understanding, method 200 will be described in connection with the particular example scenario in which user 130-1 visits an offline exhibition. As expressly noted above, this example scenario should not be construed as limiting the scope of the embodiments of the present disclosure.
At block 210, user device 110-1 captures one or more images in a scene with camera 140 of user device 110-1. As a particular example, user 130-1 opens social software that supports AR presentation functionality via user device 110-1 and captures images of the surrounding scene with camera 140. In some embodiments, the social software may include a "scan" function, and user 130-1 triggers the turning on of camera 140 by selecting or tapping an interface element corresponding to the scan function.
In some embodiments, the camera 140 may automatically complete the capturing of one or more images without any additional action by the user 130-1. In other words, the operation of user device 110-1 to capture one or more images in a scene with camera 140 of user device 110-1 may not be perceived by user 130-1. In this way, the triggering process of image capture is optimized, reducing unnecessary operations by the user 130-1.
At block 220, the user device 110-1, while presenting the scene via the user device 110-1, also presents information associated with a target object 151 in the scene, wherein the target object 151 is identified based on at least one of the captured one or more images, and the information associated with the target object 151 includes at least user rating information for the target object 151.
As a particular embodiment, user 130-1 participates in an offline exhibition, and user 130-1 generates an AR picture for the offline exhibition scene via their user device 110-1. Further, the offline exhibition includes an artwork, i.e., the target object 151. At the user device 110-1, information associated with the artwork is presented in the AR scene 150 and includes user rating information associated with the artwork.
In some embodiments, user device 110-1 may interact with remote device 180 in order to identify the target object 151. Specifically, user device 110-1 transmits at least one of the one or more images to remote device 180 such that remote device 180 may identify target object 151 based on the at least one image. Further, user device 110-1 receives the identified information associated with target object 151 from remote device 180 for display.
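As an illustrative, non-limiting sketch of this exchange between user device 110-1 and remote device 180, consider the following Python fragment. The endpoint URL, payload layout, and response field names are hypothetical assumptions introduced purely for illustration; the present disclosure does not prescribe any particular wire format.

```python
# A minimal sketch of the client-side identification round trip, assuming a
# hypothetical HTTP endpoint on the remote device. All field names here are
# illustrative assumptions, not part of the disclosure.
import requests


def identify_target(image_bytes: bytes, server: str = "https://remote.example.com"):
    """Upload a captured frame; receive information about the identified target object."""
    resp = requests.post(
        f"{server}/identify",  # hypothetical endpoint
        files={"image": ("frame.jpg", image_bytes, "image/jpeg")},
        timeout=5,
    )
    resp.raise_for_status()
    info = resp.json()
    # Hypothetical response fields: identification and introduction information,
    # plus the user rating information (comments and likes) to overlay in the AR scene.
    return {
        "id": info.get("identification"),
        "introduction": info.get("introduction"),
        "comments": info.get("comments", []),
        "likes": info.get("likes", 0),
    }
```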
In some embodiments, the information associated with the target object 151 also includes identification information or introduction information about the target object 151. In this way, user 130-1 need not know about or search for the target object 151 in advance; turning on the camera 140 or opening social software that supports AR presentation suffices to obtain the information associated with the target object 151.
In some embodiments, the remote device 180 has pre-stored therein information of the target object 151, including, but not limited to, identification information of the target object 151 and image characteristic parameters of the target object 151. After remote device 180 receives at least one image transmitted by user device 110-1, remote device 180 may determine target object 151 based on image recognition techniques and pre-stored information, e.g., determine identification information of target object 151.
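By way of example only, such matching against pre-stored image feature parameters might be realized with ORB descriptors, as in the sketch below. ORB matching is merely one illustrative choice of image recognition technique, and the database layout, distance threshold, and acceptance score are all assumptions.

```python
# A sketch of remote-side recognition against pre-stored ORB descriptors.
# The matching strategy and thresholds are illustrative assumptions.
from typing import Optional

import cv2
import numpy as np

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)


def recognize(frame_gray: np.ndarray, database: dict) -> Optional[str]:
    """Return the identification of the best-matching stored object, if any.

    `database` maps identification strings to pre-computed ORB descriptor arrays.
    """
    _, query_desc = orb.detectAndCompute(frame_gray, None)
    if query_desc is None:
        return None
    best_id, best_score = None, 0
    for object_id, stored_desc in database.items():
        matches = matcher.match(query_desc, stored_desc)
        # Count sufficiently close descriptor matches as evidence for this object.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_id, best_score = object_id, score
    return best_id if best_score >= 25 else None  # assumed acceptance threshold
```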
As a particular embodiment, the user device 110-1 captures a plurality of images of an artwork using the camera 140 and uploads at least a portion of the images of the plurality of images to the remote device 180, and the remote device 180 recognizes the artwork based on the received images and further obtains identification information of the artwork.
In some embodiments, the remote device 180 may obtain introduction information of the target object 151, user rating information for the target object 151, and the like based on the identification information of the target object 151. Further, in some embodiments, remote device 180 may send the above information to user device 110-1 for display.
In some embodiments, the user rating information includes comments for the target object 151. Alternatively or additionally, in some embodiments, the user rating information includes likes for the target object 151. In some embodiments, user device 110-1 may also obtain the user rating information for the target object 151 via the remote device 180. In some embodiments, the remote device 180 obtains the user rating information associated with the target object 151 via the identification information of the target object 151.
By presenting like information, the user can quickly obtain other users' overall evaluation of the target object 151. Alternatively or additionally, by presenting comment information, the user can obtain other users' more detailed rating information for the target object 151. By presenting diversified rating information, the user can quickly obtain the user rating information associated with the target object 151 in a manner that matches personal habits and practical needs.
In some embodiments, the user rating information includes first user rating information for the target object 151 by the current user 130-1. Alternatively or additionally, in some embodiments, the user rating information comprises second user rating information for the target object 151 by at least one other user.
In this way, the user may obtain their own and/or other users' rating information for the target object 151 in the presented scene by simply turning on the camera, without having to open a web page and search for the target object.
Alternatively, in some embodiments, the user rating information associated with the target object 151 may not be specific to the target object 151. Specifically, the user rating information associated with the target object 151 may also include, but is not limited to, a message, an insight, a viewpoint, a wish, and the like of the user. In this way, the content and the form of the interactive information are more diversified, and the interactive experience of the user is further improved.
In this manner, the user 130-1 need only turn on the camera 140 to capture one or more images in the scene, and can obtain user rating information for the target object 151 in the scene in a timely and quick manner. Further, by presenting the AR scene and the user rating information of the target object 151 at the same time, the display of the interactive information is more interesting and intuitive.
For ease of understanding only, an exemplary method 300 of online interaction is described with reference to FIG. 3A. At block 305, user device 110-1 obtains at least one image. In particular, user device 110-1 captures one or more images in a scene through camera 140.
At block 310, a target object 151 is determined based on the acquired at least one image. In some embodiments, user device 110-1 determines the target object 151 in the scene based on a local search. Alternatively, user device 110-1 uploads some or all of the captured images to remote device 180 to perform the search for the target object 151. Either way, identification of the target object 151 is enabled.
At block 315, based on the determined target object 151, user rating information corresponding to the target object 151 is obtained. In some embodiments, user rating information for the target object 151, such as comments and likes information for the target object 151, may be obtained at the remote device 180 based on the search results.
At block 320, an AR presentation is conducted at user device 110-1. Specifically, information associated with the target object 151 is displayed superimposed on the real-world picture 155. For example, information associated with the target object 151 is represented as an AR object, which is displayed superimposed on the real-world picture 155. Additionally, in some embodiments, the information associated with the target object 151 is dynamically updated by the remote device 180. In particular, the remote device 180 obtains comments and likes for the target object 151 from the current user 130-1 and at least one other user 130 in real time, and dynamically updates the user rating information presented at the user device 110-1 based on the obtained comments and likes.
In some embodiments, remote device 180 may provide the most recent user comments to user device 110-1 for display. Alternatively, in other embodiments, remote device 180 may provide a subset of the user comments to user device 110-1 for display, e.g., high-quality comments screened according to a predetermined rule.
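As a non-limiting illustration of such a predetermined screening rule, the sketch below scores each comment by an assumed combination of its likes and its length; the actual rule is left open by the disclosure.

```python
# A sketch of screening "high quality" comments by a predetermined rule.
# The scoring formula and result limit are assumptions for illustration.
def screen_comments(comments: list[dict], limit: int = 5) -> list[dict]:
    """Return the top comments under an assumed quality score."""
    def score(comment: dict) -> float:
        # Likes dominate; a small bonus rewards more substantial text.
        return comment.get("likes", 0) + min(len(comment.get("text", "")), 100) / 100
    return sorted(comments, key=score, reverse=True)[:limit]
```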
Further, remote device 180 may also provide the user information corresponding to each comment to user device 110-1 for simultaneous display. In this way, the current user may further obtain the user information corresponding to a comment and, based on that user information, trigger further interaction with that user through a predetermined action.
In some embodiments, user 130-1 may interact online in the presented AR scene 150. Specifically, the user device 110-1 receives, from the current user 130-1, first user rating information for a target object 151 in the scene, and presents the first user rating information while presenting the scene. Additionally, in some embodiments, user device 110-1 captures, with camera 140, a gesture of the current user 130-1 representing the first user rating information.
As a particular example, when user 130-1 reaches out a hand so that it appears in the AR scene 150, user 130-1 is deemed to intend to perform an interactive action. Further, in some embodiments, the user 130-1 may perform actions such as raising a thumb, virtually tapping, or virtually holding up an object. It should be understood that the present disclosure does not limit the particular gesture that represents the first user rating information.
The user 130-1 may thus evaluate the target object 151 without leaving the camera 140 interface, merely by operating in the AR scene 150 through predefined gestures. Triggering or completing the evaluation of the target object 151 through gestures simplifies the interaction process and makes the interaction more engaging.
In some embodiments, the user rating information is displayed by causing a corresponding augmented reality (AR) object to be displayed in the scene in an animated form. As a particular example, a virtual object 152 may be presented for interaction in the AR scene 150. Referring to FIG. 4A, a diagram 400 of capturing a gesture representing first user rating information in the AR scene 150 is shown. Specifically, in response to detecting the hand of user 130-1 in the AR scene 150, a virtual object 152 is displayed superimposed in the AR scene 150, illustrated in FIG. 4A as a heart. Further, the virtual object 152 may move along a predetermined trajectory, for example, flying from top to bottom. The user 130-1 may touch/tap the virtual object 152 with a finger, make a motion such as cupping a hand under the virtual object 152, or the like. Based on the gesture, user device 110-1 determines the rating information of user 130-1 for the target object 151, e.g., a like.
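For illustration only, the gesture interaction of FIG. 4A might be realized as in the sketch below, where detect_fingertip stands in for whatever hand tracking the device provides; it is a hypothetical helper, as are the screen-coordinate conventions.

```python
# A sketch of the FIG. 4A interaction: when the user's fingertip reaches the
# falling heart-shaped virtual object 152, a like is registered.
from dataclasses import dataclass
from typing import Optional


@dataclass
class VirtualObject:
    x: float       # screen coordinates of the heart's center
    y: float
    radius: float  # hit-test radius around the heart


def detect_fingertip(frame) -> Optional[tuple[float, float]]:
    """Hypothetical hand tracker: fingertip screen coordinates, if a hand is visible."""
    raise NotImplementedError("stands in for the device's hand-tracking facility")


def update_interaction(frame, heart: VirtualObject, likes: int) -> int:
    """Register a like when the fingertip touches the virtual object."""
    tip = detect_fingertip(frame)
    if tip is not None:
        dx, dy = tip[0] - heart.x, tip[1] - heart.y
        if dx * dx + dy * dy <= heart.radius ** 2:  # fingertip contacts the heart
            likes += 1                              # first user rating information: a like
    return likes
```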
Alternatively, as another particular embodiment, the user 130-1 may indicate the desire to perform an interactive action by tapping the interface/screen. In this case, the tap may be recognized either as a like action itself or as a triggering event that causes the virtual object 152 to be displayed.
In some embodiments, the display state of the user rating information may change dynamically with the number of interactions. Referring to FIG. 4B, a schematic diagram 450 is shown in which the virtual object 152 in the AR scene 150 changes dynamically with the user rating information. As shown in FIG. 4B, the virtual object 152 changes as the number of likes changes. It should be understood that FIG. 4B merely illustrates the virtual object 152 changing dynamically with the user rating information; in other embodiments, the correspondence between the virtual object 152 and the user rating information may be configured according to other suitable rules, one of which is sketched below. In this way, the display of the user rating information is more interesting.
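As one assumed configuration rule, the sketch below maps the number of likes to a display state of the virtual object 152; the thresholds and tier names are illustrative assumptions, since the disclosure leaves the correspondence configurable.

```python
# A sketch of FIG. 4B-style dynamic display states driven by the like count.
# Thresholds and tier names are assumptions.
def display_state(likes: int) -> dict:
    """Map the accumulated like count to a display state for the virtual object."""
    if likes < 10:
        return {"scale": 1.0, "tier": "single heart"}
    if likes < 100:
        return {"scale": 1.5, "tier": "heart burst"}
    return {"scale": 2.0, "tier": "heart shower"}
```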
In some embodiments, the user rating information is displayed by causing the corresponding augmented reality (AR) object to be displayed at a predetermined position relative to the target object 151, i.e., the virtual object 152 is displayed at a predetermined position. For example, when the target object 151 is a work of art, the AR object corresponding to the user rating information is displayed above or below the work of art, or to its left or right. When the target object 151 is a building, the AR object corresponding to the user rating information is displayed above or below the building, or to its left or right.
Further, AR objects corresponding to the user rating information may be presented dynamically in the AR scene 150 along a predetermined trajectory. For example, the AR object corresponding to the user rating information may drift from the top of the screen to a position above or below the target object 151, or to its left or right.
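By way of illustration, such a predetermined trajectory might be computed as in the sketch below; linear interpolation with an ease-out profile is an assumed choice, as are the example coordinates in the usage comment.

```python
# A sketch of drifting an AR object from the top of the screen to a point near
# the target object along a predetermined trajectory.
def trajectory(start: tuple[float, float], target: tuple[float, float],
               steps: int = 30) -> list[tuple[float, float]]:
    """Return per-frame screen positions from `start` to `target`."""
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        eased = 1 - (1 - t) ** 2  # ease-out: the drift slows as it nears the target
        x = start[0] + (target[0] - start[0]) * eased
        y = start[1] + (target[1] - start[1]) * eased
        path.append((x, y))
    return path


# e.g., drift from above the screen to a point just above the target object
# (assumed coordinates): frames = trajectory(start=(540.0, -50.0), target=(540.0, 820.0))
```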
In this way, the AR object corresponding to the user rating information is visually and closely associated with the corresponding target object 151, so that the display of the interactive information is more interesting and intuitive.
For ease of understanding only, an exemplary method 350 of online interaction is described with reference to FIG. 3B. At block 355, the AR scene 150 is rendered at user device 110-1, and the 3D coordinates of the target object 151 in the coordinate system of user device 110-1 are determined. At block 360, user device 110-1 obtains an interaction gesture for generating user rating information. At block 365, a trajectory animation of the user rating information is determined based on at least one of the gesture trigger position and the 3D coordinates of the target object 151. In this way, the AR object corresponding to the user rating information may move in the AR scene 150 along a predetermined trajectory, making the interaction more engaging.
In some embodiments, to better show the spatial position of the target object 151 in the AR scene 150 and/or the spatial position of the corresponding AR object in the AR scene, it is necessary to determine the coordinate information of the target object 151 in the 3D coordinate system of the camera 140.
In some embodiments, 2D coordinates of the target object 151 are obtained based on at least some of the images captured by the camera 140, and the 2D coordinates may be mapped to 3D coordinates of the camera 140 by Simultaneous Localization and Mapping (SLAM). In this manner, user device 110-1 may determine the 3D coordinates of target object 151 without scanning the target object in advance. It should be understood that the SLAM algorithm is merely an example of mapping 2D coordinates to 3D coordinates. In other embodiments, the conversion from 2D coordinates to 3D coordinates may be accomplished using other existing, or future proposed, algorithms.
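For illustration, once SLAM supplies a depth estimate for the pixel at which the target object 151 is detected, the 2D-to-3D mapping reduces to standard pinhole back-projection, as sketched below; the intrinsic parameter values in the usage comment are assumptions.

```python
# A sketch of back-projecting a 2D detection into the camera's 3D coordinate
# system, given a SLAM depth estimate and pinhole camera intrinsics.
def unproject(u: float, v: float, depth: float,
              fx: float, fy: float, cx: float, cy: float) -> tuple[float, float, float]:
    """Back-project pixel (u, v) at the given depth into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


# e.g., with assumed intrinsics fx = fy = 1000, cx = 540, cy = 960 and a SLAM
# depth estimate of 2.5 m:
# anchor_3d = unproject(620, 880, 2.5, fx=1000, fy=1000, cx=540, cy=960)
```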
Alternatively, in other embodiments, the target object 151 may need to be 3D scanned in advance to obtain a 3D point cloud model of the target object 151. Further, the point cloud model may be stored at the user device 110-1 or the remote device 180. In the AR presentation process, the user device 110-1 may obtain a 3D point cloud model, and compare the feature points of the target object 151 extracted in real time with the pre-generated 3D point cloud, thereby determining the position of the target object 151 in the 3D coordinate system of the camera 140. By 3D scanning the target object 151 in advance, the determined position of the target object 151 in the camera 3D coordinate system will be more accurate.
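By way of example, once 2D-3D correspondences between the feature points extracted in real time and the pre-generated 3D point cloud are established, the pose of the target object 151 in the camera's 3D coordinate system can be recovered with a perspective-n-point (PnP) solve, as sketched below; the correspondence step itself is abstracted away here as an assumption.

```python
# A sketch of locating a pre-scanned target object in camera coordinates from
# matched 2D-3D correspondences, using OpenCV's PnP solver.
import cv2
import numpy as np


def locate_target(points_3d: np.ndarray,   # Nx3 points from the stored point cloud model
                  points_2d: np.ndarray,   # Nx2 corresponding live-frame detections
                  camera_matrix: np.ndarray):
    """Return the target object's rotation and translation in camera coordinates."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        camera_matrix,
        None,  # assume an undistorted image for simplicity
    )
    return (rvec, tvec) if ok else None
```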
Additionally, based on the position of the target object 151 in the camera 3D coordinate system, the presentation position and/or the predetermined trajectory of the AR object corresponding to the user evaluation information may be further accurately determined. For example, the AR object corresponding to the user rating information may be accurately presented/flown above or below the target object 151, or to the left or right of the target object 151.
Briefly, in accordance with the above-described embodiments of the present disclosure, user 130-1 may interact online in a timely and convenient manner via the AR scene 150 presented by user device 110-1. Specifically, user 130-1 need only turn on camera 140 to identify the target object 151 in the AR scene 150. Further, user rating information associated with the target object 151 may be presented concurrently in the AR scene 150. Further, user 130-1 need not leave the camera 140 interface, but need only operate in the AR scene 150 through predefined gestures to evaluate the target object 151. Further, according to some embodiments of the present disclosure, the user rating information may be displayed in the AR scene 150 as an overlaid AR object, and the display position and/or motion trajectory of the AR object may be determined according to the position of the target object 151.
The method therefore fuses online virtual content with offline real content, simplifies the interaction process, and makes the interaction process of user 130-1 more engaging.
Example apparatus and devices
Fig. 5 shows a block diagram of an apparatus 500 for AR interaction. As shown, the apparatus 500 includes an image capture module 510 configured to capture one or more images in a scene with a camera of a user device; and a presentation module 520 configured to present information associated with a target object 151 in the scene while presenting the scene via the user device 110, the target object 151 being identified based on at least one of the captured one or more images, the information comprising at least user rating information associated with the target object 151.
In some embodiments, the user rating information associated with the target object 151 includes comments for the target object 151. Alternatively or additionally, in some embodiments, the user rating information includes likes for the target object 151.
In some embodiments, the user rating information associated with the target object 151 includes first user rating information for the target object 151 by the current user. Alternatively or additionally, in some embodiments, the user rating information includes second user rating information for the target object 151 by at least one other user.
In some embodiments, the apparatus 500 further comprises a first user rating information receiving module (not shown) configured to receive, via the user device 110, first user rating information of the current user for a target object 151 in the scene. The presentation module 520 is further configured to present the first user rating information while presenting the scene.
In some embodiments, the first user rating information receiving module is further configured to capture, with the camera 140, a gesture of the current user representing the first user rating information.
In some embodiments, the user rating information is displayed by causing the corresponding augmented reality (AR) object to be displayed in the scene in an animated form.
In some embodiments, the user rating information is displayed by causing the corresponding augmented reality (AR) object to be displayed at a predetermined position relative to the target object 151.
In some embodiments, the information further includes at least one of identification information and introduction information related to the target object 151.
In some embodiments, the apparatus 500 further comprises an image transmission module (not shown) configured to transmit at least one of the one or more images to the remote device 180, such that the remote device 180 identifies the target object 151 based on the at least one image; and an identification information receiving module (not shown) configured to receive the identified information associated with the target object 151 from the remote device 180 for display.
In some embodiments, the information is dynamically updated by the remote device 180.
The elements included in apparatus 500 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more of the units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or as an alternative to, machine-executable instructions, some or all of the elements in apparatus 500 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
FIG. 6 illustrates a block diagram of a computing device 600 in which one or more embodiments of the disclosure may be implemented. It should be understood that the computing device 600 illustrated in FIG. 6 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein. The computing device 600 illustrated in fig. 6 may be used to implement the user device 110 of fig. 1.
As shown in fig. 6, computing device 600 is in the form of a general-purpose electronic device. Components of computing device 600 may include, but are not limited to, one or more processors or processing units 610, memory 620, storage 630, one or more communication units 640, one or more input devices 650, and one or more output devices 660. The processing unit 610 may be a real or virtual processor and can perform various processes according to programs stored in the memory 620. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of computing device 600.
Computing device 600 typically includes a number of computer storage media. Such media may be any available media that is accessible by computing device 600, and includes, but is not limited to, volatile and non-volatile media, removable and non-removable media. The memory 620 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 630 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data for training) and that can be accessed within computing device 600.
Computing device 600 may further include additional removable/non-removable, volatile/non-volatile storage media. Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 620 may include a computer program product 625 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 640 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of computing device 600 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communication connection. Thus, computing device 600 may operate in a networked environment using logical connections to one or more other servers, network personal computers (PCs), or another network node.
Input device 650 may be one or more input devices, such as a mouse, keyboard, or trackball. Output device 660 may be one or more output devices, such as a display, speakers, or printer. Computing device 600 may also communicate, as desired via communication unit 640, with one or more external devices (not shown) such as storage devices or display devices, with one or more devices that enable a user to interact with computing device 600, or with any device (e.g., a network card, a modem, etc.) that enables computing device 600 to communicate with one or more other electronic devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure; the above description is illustrative, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (15)

1. A method of online interaction, comprising:
capturing one or more images in a scene with a camera of a user device; and
presenting, while presenting the scene via the user device, information associated with a target object in the scene, the target object identified based on at least one of the one or more captured images, the information including at least user rating information associated with the target object.
2. The method of claim 1, wherein the user rating information associated with the target object comprises at least one of: a comment for the target object, and a like for the target object.
3. The method of claim 1, wherein the user rating information associated with the target object includes at least one of:
first user rating information for the target object by the current user, and
second user rating information for the target object by at least one other user.
4. The method of claim 3, further comprising:
receiving, via the user device, the first user rating information of the current user for the target object in the scene; and
presenting the first user rating information while presenting the scene.
5. The method of claim 4, wherein receiving the first user rating information for the current user comprises:
capturing, with the camera, a gesture of the current user representing the first user rating information.
6. The method of claim 1, wherein the user rating information is displayed by causing a corresponding augmented reality (AR) object to be displayed in the scene in an animated form.
7. The method of claim 1, wherein the user rating information is displayed by causing a corresponding augmented reality (AR) object to be displayed at a predetermined position relative to the target object.
8. The method of claim 1, wherein the information further comprises at least one of identification information and introduction information related to the target object.
9. The method of claim 1, further comprising:
transmitting the at least one of the one or more images to a remote device such that the remote device identifies the target object based on the at least one image; and
receiving, from the remote device, the identified information associated with the target object for display.
10. The method of claim 9, wherein the information is dynamically updated by the remote device.
11. An online interaction device, comprising:
an image capture module configured to capture one or more images in a scene with a camera of a user device; and
a presentation module configured to present information associated with a target object in the scene, the target object being identified based on at least one of the one or more captured images, while presenting the scene via the user device, the information including at least user rating information associated with the target object.
12. The apparatus of claim 11, wherein the user rating information associated with the target object comprises at least one of: a comment for the target object, and a like for the target object.
13. The apparatus of claim 11, wherein the user rating information associated with the target object is displayed by causing a corresponding augmented reality (AR) object to be displayed in the scene in an animated form.
14. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method of any of claims 1-10.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 10.

Priority Applications (1)

Application Number: CN202211424539.0A
Priority Date: 2022-11-14
Filing Date: 2022-11-14
Title: Online interaction method, device, equipment and storage medium


Publications (1)

Publication Number: CN115933929A (en)
Publication Date: 2023-04-07

Family

ID=86651476

Family Applications (1)

Application Number: CN202211424539.0A (Pending)
Title: Online interaction method, device, equipment and storage medium
Priority Date: 2022-11-14
Filing Date: 2022-11-14

Country Status (1)

Country: CN
Link: CN115933929A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination