CN113359985A - Data display method and device, computer equipment and storage medium - Google Patents

Data display method and device, computer equipment and storage medium

Info

Publication number
CN113359985A
CN113359985A (application CN202110620107.6A)
Authority
CN
China
Prior art keywords
special effect
effect
determining
identifier
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110620107.6A
Other languages
Chinese (zh)
Inventor
田真
李斌
欧华富
王婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110620107.6A priority Critical patent/CN113359985A/en
Publication of CN113359985A publication Critical patent/CN113359985A/en
Priority to PCT/CN2021/133452 priority patent/WO2022252518A1/en
Priority to TW111107480A priority patent/TW202248961A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a data presentation method, apparatus, computer device and storage medium, wherein the method comprises: acquiring a live-action image which is acquired by an augmented reality (AR) device and contains a target bottle; detecting, in the live-action image, a first identifier corresponding to the target bottle and a second identifier corresponding to a physical object other than the target bottle; determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect; and displaying the third AR special effect.

Description

Data display method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a data display method and apparatus, a computer device, and a storage medium.
Background
Augmented Reality (AR) is a relatively new technology that promotes the integration of real-world information and virtual-world information, so that a real environment and virtual objects are presented in the same picture or space in real time. In recent years, with the rapid development of intelligent terminal devices, AR devices have become increasingly widely used, for example, in AR creative products. In the existing AR technology, a single object may be identified to trigger the display of virtual content for that object, and this virtual content is usually fixed, such as an introduction of the object. Therefore, the application scenario of the existing AR technology is limited and cannot meet the increasingly rich demands of users.
Disclosure of Invention
The embodiment of the disclosure at least provides a data display method, a data display device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a data display method, including: acquiring a live-action image which is acquired by an augmented reality (AR) device and contains a target bottle; detecting, in the live-action image, a first identifier corresponding to the target bottle and a second identifier corresponding to a physical object other than the target bottle; determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained after the first AR special effect and the second AR special effect are fused; and displaying the third AR special effect.
In the embodiment of the disclosure, a first identifier corresponding to a target bottle and a second identifier corresponding to a physical object other than the target bottle are detected in a live-action image, and a third AR special effect formed by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier is determined, so that different AR special effects can be triggered and displayed when the target bottle is combined with other physical objects. This processing enriches the triggering conditions of AR special effects and realizes AR interaction between the user and the bottle. While increasing the interest for the user, it can improve the user's experience of the AR technology, meet the user's diversified usage demands, and also widely promote information related to the bottle.
In an optional implementation manner, the determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect includes: searching a special effect library for a fused AR special effect matching the first AR special effect and the second AR special effect, wherein the special effect library includes special effect information of fused AR special effects for the AR special effects corresponding to a plurality of identifiers; and determining the found fused AR special effect as the third AR special effect.
In the above embodiment, by determining the third AR special effect from the determined fused special effect of the first AR special effect and the second AR special effect, different AR special effects can be triggered and displayed when the target bottle is combined with other physical objects, the triggering conditions of AR special effects can be enriched, and AR interaction between the user and the target bottle can be realized, thereby increasing the interest for the user during use.
In an optional implementation manner, the determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect further includes: and under the condition that the fusion AR special effect is not found in the special effect library, carrying out special effect fusion on the first AR special effect and the second AR special effect according to a preset fusion mode, and obtaining the third AR special effect after fusion.
In the above embodiment, when the fused AR special effect is not found, the first AR special effect and the second AR special effect are fused to obtain the third AR special effect, which enriches the special effect content in the special effect library, presents richer special effects to the user, and improves the user experience.
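The lookup-with-fallback flow of the two embodiments above can be sketched as follows. This is a minimal illustration only: the effect library contents, the effect names, and the `fuse_effects` fusion rule are assumptions for the sketch, not part of the disclosed implementation.

```python
from typing import Optional

# Special effect library: maps a pair of per-identifier AR effects to a
# curated fused ("third") AR effect. Entries are illustrative assumptions.
EFFECT_LIBRARY: dict[tuple[str, str], str] = {
    ("bottle_3d_dragon", "card_2d_sticker"): "fused_dragon_over_card",
}

def fuse_effects(first: str, second: str) -> str:
    """Fallback fusion in a preset mode (here, simply layering the two)."""
    return f"layered({first}+{second})"

def third_effect(first: str, second: str) -> str:
    # Prefer a curated fused effect found in the special effect library...
    found: Optional[str] = EFFECT_LIBRARY.get((first, second))
    if found is not None:
        return found
    # ...otherwise fuse on the fly and cache the result, which enriches
    # the special effect content of the library as described above.
    fused = fuse_effects(first, second)
    EFFECT_LIBRARY[(first, second)] = fused
    return fused
```

Caching the on-the-fly fusion back into the library matches the stated benefit that missing combinations enrich the library over time.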
In an optional embodiment, the determining a third AR special effect after the first AR special effect and the second AR special effect are fused includes: identifying type information of a target object in the live-action image, wherein the target object comprises: the target bottle and/or the physical object; and determining, according to the type information, the third AR special effect after the first AR special effect and the second AR special effect are fused.
In the above embodiment, by identifying the type information of the target object in the live-action image and then determining, according to the type information, the third AR special effect after the first AR special effect and the second AR special effect are fused, third AR special effects containing different elements can be triggered and displayed for different types of bottles and/or different types of physical objects. This enriches the special effect types and triggering modes of AR special effects, improves the interest of the AR technology, and at the same time lets the user better understand content related to the target bottle, so that the target bottle can be promoted.
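The type-driven selection above can be sketched as a simple lookup table. The type labels and element names below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical mapping from (bottle type, physical object type) to the
# element set of the fused third AR effect; labels are assumptions.
FUSION_ELEMENTS = {
    ("beverage_bottle", "postcard"): "scenic_spot_animation",
    ("beverage_bottle", "admission_ticket"): "event_theme_animation",
}

def third_effect_by_type(bottle_type: str, object_type: str,
                         default: str = "generic_animation") -> str:
    """Pick the fused-effect elements from the recognized type information,
    falling back to a generic animation for unknown combinations."""
    return FUSION_ELEMENTS.get((bottle_type, object_type), default)
```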
In an optional embodiment, the acquiring a live-action image including a target bottle body acquired by an augmented reality AR device includes: loading and displaying the H5 page on a display interface of the AR device in response to a loading request of an H5 page, wherein the H5 page contains prompt information for prompting a user to jump to a special effect display page; responding to the triggering operation of the prompt message in the H5 page, jumping to the special effect display page, acquiring the live-action image, and displaying the live-action image on the special effect display page.
In the above embodiment, by loading the H5 page and executing the corresponding trigger operation on the prompt information in the H5 page, the method of skipping to the special effect display page to display the live-action image can display the AR special effect of the target bottle without downloading the corresponding client, thereby saving the device memory of the AR device, simplifying the user operation, and improving the user experience of the user using the data display method provided by the present disclosure.
In an optional embodiment, the determining a third AR special effect after the first AR special effect and the second AR special effect are fused includes: acquiring user information of a user to which the AR equipment belongs; wherein the user information comprises at least one of: the basic attribute of the user, the equipment type of the AR equipment, the access times of the user to access the special effect display page and the historical AR special effect pushed for the user; determining a fusion special effect type matched with the user information; and determining a third AR special effect after the first AR special effect and the second AR special effect are fused according to the type of the fused special effect.
In the above embodiment, by determining at least one fusion special effect type matched with the user information and determining a third AR special effect in the at least one fusion special effect type, different users can customize AR special effects of different styles individually, so as to improve the user experience of the users.
In an optional embodiment, the determining a type of the fused special effect matching the user information includes: in a case where the user information contains the historical AR special effects, determining a historical special effect type of a target historical AR special effect most recently pushed to the user among the historical AR special effects; and determining the fused special effect type from special effect types other than the historical special effect type among a plurality of preset special effect types.
In the above embodiment, by the processing method, the same AR special effect can be prevented from being continuously pushed to the user, so that the user experience can be improved.
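The history-exclusion rule above can be sketched as follows; the preset type names are assumptions for illustration.

```python
import random

# Preset fused special effect types; the names are illustrative assumptions.
PRESET_TYPES = ["festival", "cartoon", "landscape"]

def pick_fusion_type(history: list[str]) -> str:
    """Choose a fused special effect type, excluding the type of the AR
    effect most recently pushed to the user so the same style is not
    pushed to the user twice in a row."""
    last = history[-1] if history else None
    candidates = [t for t in PRESET_TYPES if t != last]
    return random.choice(candidates)
```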
In an alternative embodiment, the detecting, in the live-action image, a first identifier corresponding to the target bottle and a second identifier corresponding to a physical object other than the target bottle includes: in a case where the live-action image contains a plurality of bottles, determining, in response to a selection instruction for a target bottle in the live-action image, that the identifier of the target bottle is the first identifier; and determining, among the identifiers corresponding to the physical objects in the live-action image, the identifier associated with the first identifier as the second identifier.
In the above embodiment, when the live-action image includes a plurality of bottles, by responding to a selection instruction of a user for a target bottle and determining the first identifier and the second identifier according to the selection instruction, the AR special effects of the plurality of target bottles can be respectively displayed in a real-time image (i.e., the live-action image), so that the special effect display operation of the user can be simplified, and the user experience can be further improved.
In an alternative embodiment, the determining that the identifier of the target bottle is the first identifier includes: determining a trigger position of the selection instruction in the live-action image; determining the target bottle according to the trigger position; identifying the identifier corresponding to the target bottle; and determining the identified identifier as the first identifier.
In the above embodiment, by determining the identifier of the target bottle as the first identifier according to the trigger position of the selection instruction in the live-action image, the AR special effects of a plurality of bottles can be respectively displayed in one real-time image (i.e., the live-action image), so that the special effect display operation of the user can be simplified, and the user experience can be further improved.
In an optional implementation manner, the method is applied to a client application platform, and the client application platform is a Web-side application platform or an applet-side application platform.
In the above embodiment, the AR special effect is displayed through the applet side or the Web side, so that the process of downloading a corresponding application (APP) can be omitted, and the display flow of the AR special effect is simplified.
In a second aspect, an embodiment of the present disclosure further provides a data display apparatus, including: an acquisition unit, configured to acquire a live-action image which is acquired by an augmented reality (AR) device and contains the target bottle; a detection unit, configured to detect, in the live-action image, a first identifier corresponding to the target bottle and a second identifier corresponding to a physical object other than the target bottle; a determining unit, configured to determine, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect; and a display unit, configured to display the third AR special effect.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, this disclosed embodiment also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, for those skilled in the art will be able to derive additional related drawings therefrom without inventive effort.
FIG. 1 is a flow chart illustrating a data presentation method provided by an embodiment of the present disclosure;
fig. 2 is a schematic illustration showing a live-action image provided by an embodiment of the disclosure;
fig. 3 is a schematic illustration showing another real-world image provided by the embodiment of the present disclosure;
fig. 4 illustrates a live-action image including a target bottle body acquired by an augmented reality AR device in the data display method provided by the embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating that, in the data presentation method provided by the embodiment of the present disclosure, prompt information for prompting a user to jump to a target browser is displayed on a display interface of an augmented reality AR device;
FIG. 6 is a schematic diagram of a data presentation device provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It has been found through research that in the existing AR technology, a single object can be identified and the virtual content of the single object is triggered to be displayed, and the virtual content of the object is usually a fixed virtual content such as an object introduction of the object. Therefore, the application scenario of the existing AR technology is single, and cannot meet the increasingly rich demands of users.
Based on the above research, the present disclosure provides a data display method. In an embodiment of the present disclosure, a first identifier corresponding to a target bottle and a second identifier corresponding to a physical object other than the target bottle are detected in a live-action image, and a third AR special effect formed by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier is determined, so that different AR special effects can be triggered and displayed when the target bottle is combined with other physical objects. This processing enriches the triggering conditions of AR special effects and realizes AR interaction between the user and the bottle. While increasing the interest for the user, it can improve the user's experience of the AR technology, meet the user's diversified usage demands, and also widely promote information related to the bottle.
In order to facilitate understanding of the present embodiment, a data presentation method disclosed in the embodiments of the present disclosure is first described in detail, and an execution subject of the data presentation method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability. In some possible implementations, the data presentation method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data presentation method provided by the embodiment of the present disclosure is applied to a client application platform, where the client application platform may be an applet application platform or a Web application platform, or may also be a standalone application.
In the technical scheme of the disclosure, based on the client application platform (e.g., an applet application platform or a Web application platform) displaying the third AR special effect obtained by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier, the AR special effect can be displayed without installing a corresponding application APP. For example, an applet application platform is opened in a social platform, a live-action image is shot in the applet, and the third AR special effect is displayed; or opening a Web end application platform in the browser, further shooting a live-action image at the Web end, and displaying the third AR special effect.
In an embodiment of the present disclosure, the data display method includes steps S101 to S107, where:
s101: and acquiring a live-action image which is acquired by the augmented reality AR equipment and contains the target bottle body.
In the disclosed embodiment, steps S101 to S107 may be performed by a computer device.
Here, a live-action image including the target bottle may be acquired by the AR device and displayed on a display interface of the computer device. The computer device for displaying the live-action image and the AR device for collecting the live-action image may be the same device or different devices.
Here, the AR device may be any terminal device supporting AR technology, such as an AR wearable device, an AR handheld device, and the like, and the disclosure is not particularly limited thereto.
The AR wearable device may include the following types of AR devices: a head-mounted AR device and a glasses-type AR device. The AR handheld device may include the following types of AR devices: mobile terminal devices such as a mobile phone supporting the AR technology and a tablet computer supporting the AR technology.
In the embodiment of the present disclosure, the target bottle may be a container that holds or can hold water, and the target bottle may be made of any type of material, for example, plastic, glass, iron, ceramic, and the like. For example, the target bottle may be a beverage bottle or any type of iron (or glass, or plastic) water-storage bottle. The present disclosure does not specifically limit the material or form of the target bottle.
S103: and detecting a first identifier corresponding to the target bottle body and a second identifier corresponding to a physical object except the target bottle body in the live-action image.
Here, the live-action image may include a plurality of bottles and/or a plurality of physical objects, and in this case, the target bottle may be determined in the live-action image, the first identifier corresponding to the target bottle may be determined, and the second identifier corresponding to each physical object in the live-action image may be detected.
In the disclosed embodiment, the first identifier may be the target bottle itself or an additional identifier set on the target bottle. The second identifier may be the physical object itself, which may be a two-dimensional planar object or a three-dimensional object, or the second identifier may be an identifier attached to the physical object.
Here, the physical object may be a bottle body, or may be an object other than a bottle body.
In the case that the physical object is a bottle, the physical object may be a bottle with a different number from the target bottle, or a bottle with a different size from the target bottle, or a bottle with a different kind from the target bottle.
Besides, the physical object can also be a combination of objects, and the combination can be a combination of a three-dimensional object and a two-dimensional planar object, a combination of two three-dimensional objects, or a combination of two two-dimensional planar objects. For example, the combination may be a combination of a bottle and a non-bottle three-dimensional article, or a combination of a plurality of non-bottle three-dimensional articles.
In the disclosed embodiment, the two-dimensional planar object may be a card-like object of the type of postcard, admission ticket, flyer, or the like. In addition, the physical object other than the target bottle may be other types of objects, and the present disclosure is not particularly limited thereto.
Fig. 2 is a schematic diagram illustrating an effect of a real-scene image collected by an AR device. As can be seen from fig. 2, the target bottle is placed on a table, and the physical object in the live-action image other than the target bottle is a card (postcard or ticket).
In an alternative embodiment, object detection may be performed on the live-action image to obtain the position information of the target bottle and the physical object. Then, image regions corresponding to the position information are extracted from the live-action image, yielding a sub-image B1 containing the target bottle and a sub-image B2 containing the physical object.
At this time, the first identifier may be determined from the sub-image B1 and the second identifier may be determined from the sub-image B2. For example, the sub-image B1 may be determined as the first identifier, and the sub-image B2 may be determined as the second identifier. For another example, the sub-image B1 and the sub-image B2 may be subjected to image processing, and the first identifier and the second identifier may be obtained respectively after the image processing.
Here, the first identifier obtained after the processing may be used to characterize at least one of the following information: the bottle base attribute characteristics of the target bottle (e.g., bottle type, bottle color, bottle size, bottle code), the bottle type of the target bottle (i.e., three-dimensional object). The second identifier obtained after processing may be used to characterize at least one of the following: an object base attribute (type, color, size, code) of the physical object, an object type of the physical object (i.e., a three-dimensional object or a two-dimensional planar object).
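The detection-and-crop pipeline above can be sketched as follows. The attribute set derived here (size, mean color, object type) is an illustrative assumption standing in for the basic attribute features described in the text.

```python
import numpy as np

def crop(image: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Extract a sub-image (such as B1 or B2) at a detected position."""
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

def build_identifier(sub_image: np.ndarray, object_type: str) -> dict:
    """Derive a simple identifier from a sub-image: basic attributes
    (size, mean color) plus the object type label."""
    h, w = sub_image.shape[:2]
    mean_color = sub_image.reshape(-1, sub_image.shape[2]).mean(axis=0)
    return {"size": (w, h), "mean_color": tuple(mean_color), "type": object_type}
```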
As shown in fig. 3, in another alternative embodiment, the identification code set on the target bottle can be detected to obtain the first identifier, and the identification code set on the physical object can be detected to obtain the second identifier. Here, the first identifier and the second identifier may be in the form of identifiers for containing different AR special effect information, and may be in the form of a two-dimensional code, a barcode, or the like, for example.
S105: and determining a third AR special effect obtained by fusing the first AR special effect and the second AR special effect based on the first AR special effect corresponding to the first identification and the second AR special effect corresponding to the second identification.
In the embodiment of the present disclosure, after the first identifier and the second identifier are determined, the AR special effects corresponding to the first identifier and the second identifier may be respectively determined, thereby obtaining the first AR special effect and the second AR special effect.
In an embodiment of the disclosure, the first AR special effect may be a three-dimensional special effect (hereinafter referred to as a 3D special effect), and the second AR special effect may be a two-dimensional special effect (hereinafter referred to as a 2D special effect). Here, the 3D special effect may be a video special effect, and the 2D special effect may be a sticker special effect.
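The identifier-to-effect lookup of step S105's first half can be sketched as a registry; the identifier codes and effect names below are assumptions for illustration only.

```python
# Hypothetical registry mapping identifiers (e.g. decoded from two-dimensional
# codes on the bottle and the physical object) to AR special effects, each
# tagged with its dimensionality as described in the text.
EFFECT_REGISTRY = {
    "bottle_code_001": ("3d", "video_effect"),
    "postcard_code_101": ("2d", "sticker_effect"),
}

def effects_for(first_id: str, second_id: str):
    """Look up the first (here 3D) and second (here 2D) AR special effects
    corresponding to the first and second identifiers, respectively."""
    return EFFECT_REGISTRY[first_id], EFFECT_REGISTRY[second_id]
```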
S107: and displaying the third AR special effect.
In the embodiment of the present disclosure, a first identifier corresponding to a target bottle and a second identifier corresponding to a physical object other than the target bottle are detected in a live-action image, and a third AR special effect formed by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier is determined, so that different AR special effects can be triggered and displayed when the target bottle is combined with other physical objects. This processing enriches the triggering conditions of AR special effects and realizes AR interaction between the user and the bottle. While increasing the interest for the user, it can improve the user's experience of the AR technology, meet the user's diversified usage demands, and also widely promote information related to the bottle.
The processes described in the above-described step S101 to step S107 will be specifically described below.
In an alternative embodiment, step S101, as shown in fig. 4, acquiring a live-action image including a target bottle collected by an augmented reality AR device specifically includes the following processes:
step S1011, responding to a loading request of an H5 page, loading and displaying the H5 page on a display interface of the AR equipment, wherein the H5 page comprises prompt information for prompting a user to jump to a special effect display page;
step S1012, in response to the trigger operation on the prompt information in the H5 page, jumping to the special effect display page, acquiring the live-action image, and displaying the live-action image on the special effect display page.
In the embodiment of the present disclosure, the augmented reality AR device starts scanning the target bottle in response to a scan instruction initiated by the user, and initiates a load request of the H5 page in case of scanning the first identifier set on the target bottle. At this time, the augmented reality AR device requests the server to load the H5 page, and displays the H5 page returned by the server on the display interface of the augmented reality AR device, and at this time, as shown in fig. 5, prompt information for prompting the user to jump to the target browser is displayed on the display interface of the augmented reality AR device. The web page of the target browser is a special effect display page used for displaying a corresponding special effect.
After the user clicks the prompt information on the H5 page, it is determined that a trigger operation of the user for the prompt information in the H5 page is detected. At this time, in response to the trigger operation, jumping to a web page (i.e., a special effect display page) of the target browser, and displaying a live-action image containing the target bottle body collected by the augmented reality AR device in the web page.
In the above embodiment, by loading the H5 page and executing the corresponding trigger operation on the prompt information in the H5 page, the method of skipping to the special effect display page to display the live-action image can display the AR special effect of the target bottle without downloading the corresponding client, thereby saving the device memory of the augmented reality AR device, simplifying the user operation, and improving the user experience of the user using the data display method provided by the present disclosure.
In addition to the method for displaying the real-world image described in the above-mentioned S1011 to S1012, the real-world image may be acquired and displayed in the following manner, and the specific process is described as follows:
the AR applet is opened in a client capable of supporting the opening applet installed through the augmented reality AR device. After the small program is opened, entering a special effect display page, and displaying a live-action image which is acquired by the AR equipment and contains the target bottle body in the special effect display page.
After the live view image is acquired, step S103 is executed: detecting a first identifier corresponding to the target bottle body and a second identifier corresponding to an entity object except the target bottle body in the live-action image, and specifically comprising the following processes:
(1) In the case that the live-action image contains a plurality of bottles, in response to a selection instruction for a target bottle in the live-action image, the identifier of the target bottle is determined to be the first identifier.
(2) Among the identifiers corresponding to all the physical objects in the live-action image, the identifier associated with the first identifier is determined as the second identifier.
In the embodiment of the present disclosure, if the acquired live-action image includes a plurality of bottles, the user may select a target bottle that needs to be subjected to special-effect display from the plurality of bottles by triggering a selection instruction.
Here, the user can trigger a corresponding selection instruction by clicking the live-action image, and at this time, the AR device can determine the target bottle body to be subjected to special effect display according to the click position of the user on the live-action image. After the target bottle is determined, the first identifier corresponding to the target bottle can be determined.
After the first identifier is determined, identifiers corresponding to all entity articles in the live-action image can be detected, and at least one identifier is obtained. Then, it is determined that the identifier associated with the first identifier is the second identifier in the at least one identifier, and the specific determination process is described as the following process:
the association identifier having an association relation with the first identifier may be looked up in the data table, and then, a target association identifier belonging to the association identifier is determined in at least one identifier corresponding to the entity item, and the target association identifier is determined as the second identifier. Identification information of all identification codes associated with each first identification is preset in the data table.
In this embodiment of the disclosure, if the association relationship between the first identifier and the at least one identifier corresponding to the entity item is not found in the data table, the user may further manually establish the association relationship between the first identifier and the at least one identifier corresponding to the entity item, and store the established association relationship in the data table.
After the first identifier and the second identifier are determined in the above-described manner, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect may be determined based on the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier, and the third AR special effect may be displayed.
In addition, after the third AR special effect is displayed, guidance information may be generated in the special effect display page, where the guidance information is used to guide the user to continue to select, in the AR device, a target bottle for AR special effect display. The first identifier and the second identifier are then determined according to the process described above, and after they are determined, the third AR special effect obtained by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier is determined and displayed. This processing increases the user's freedom in watching third AR special effects and improves the user's experience.
In the above embodiment, when the live-action image includes a plurality of bottles, by responding to a selection instruction of a user for a target bottle and determining the first identifier and the second identifier according to the selection instruction, the AR special effects of the plurality of bottles can be respectively displayed in a real-time image (i.e., the live-action image), so that the special effect display operation of the user can be simplified, and the user experience can be further improved.
In an alternative embodiment, the above step of determining that the identifier of the target bottle is the first identifier specifically includes the following:
(1) A trigger position of the selection instruction in the live-action image is determined.
(2) The target bottle is determined according to the trigger position, the identifier corresponding to the target bottle is recognized, and the recognized identifier is determined as the first identifier.
In the embodiment of the present disclosure, when the user selects the target bottle body through the selection instruction, the trigger position of the selection instruction in the live-action image may be determined.
Here, after the trigger position is determined, the trigger position and the live-action image are input into a deep learning model to determine the bounding box of the target bottle selected by the user through the deep learning model. Then, the image in the real image located in the surrounding frame is identified, so that the first identifier is identified in the image.
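As a stand-in for the deep-learning step above, the selection of a bounding box from a tap position can be sketched geometrically: among the candidate boxes detected in the live-action image, keep those containing the trigger position and prefer the smallest (most specific) one. This is an illustrative sketch with invented names, not the model described in the disclosure.

```python
def select_target_box(tap_xy, boxes):
    """Pick the bounding box (x0, y0, x1, y1) containing the tap point.

    If several boxes contain the point, the smallest-area box is chosen,
    on the assumption that it bounds the most specific object.
    """
    x, y = tap_xy
    hits = [b for b in boxes if b[0] <= x <= b[2] and b[1] <= y <= b[3]]
    if not hits:
        return None  # tap fell outside every detected bottle
    return min(hits, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
```

The image region inside the returned box would then be passed to identifier recognition, as the text describes.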
In the embodiment of the disclosure, in the case that a plurality of bottles are included in the live-action image, a trigger gesture that a user triggers to display the AR special effect of each bottle may also be displayed in the special effect display page.
Specifically, a corresponding label may be set for each bottle in the special effect display page, for example: bottle 1, bottle 2, and bottle 3. For each bottle, a corresponding trigger gesture may be set; for example, the trigger gesture corresponding to bottle 1 may be an "OK" gesture, the trigger gesture corresponding to bottle 2 may be a "yeah" (V-sign) gesture, and the trigger gesture corresponding to bottle 3 may be a "heart" gesture. Other trigger gestures may also be provided, to which the present disclosure is not particularly limited.
After the trigger gesture is displayed, the gesture information of the user can be detected through the camera device of the AR equipment, the target bottle body triggered and displayed by the user is determined according to the gesture information, the mark of the target bottle body is determined, and the determined mark is used as a first mark.
In the embodiment of the present disclosure, when the number of bottles included in the live-action image exceeds the preset number, the bottles located in the foreground portion in the live-action image may be extracted, and a corresponding trigger gesture is added to the bottles located in the foreground portion.
In the embodiment, the identification of the target bottle is determined as the first identification according to the trigger position of the selection instruction in the live-action image, so that the AR special effects of a plurality of bottles can be respectively displayed in one real-time picture (namely, the live-action image), thereby simplifying the special effect display operation of the user and further improving the user experience.
After the first identifier and the second identifier are detected in the above-described manner, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect may be determined based on the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier.
In the embodiment of the present disclosure, the detecting the first identifier and the second identifier in the live-action image in the manner described below specifically includes:
The live-action image is input into an image segmentation network for processing, obtaining the bounding box of each identifier in the live-action image. Then, the image within each bounding box is extracted, obtaining a plurality of sub-images. Each sub-image is then processed by an image processing network, obtaining the identification information of the identifier contained in that sub-image.
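The segment-crop-recognize pipeline just described can be sketched as below. The segmentation and recognition networks are injected as plain callables (`segment`, `recognize`), since the disclosure does not specify their architectures; everything here is an illustrative assumption.

```python
def detect_identifiers(image, segment, recognize):
    """Pipeline from the text: segment the live-action image into per-identifier
    bounding boxes, crop each sub-image, then recognize each crop.

    `image` is a row-major 2D array (list of rows); `segment` returns
    (x0, y0, x1, y1) boxes; `recognize` maps a cropped sub-image to its
    identification information.
    """
    results = []
    for x0, y0, x1, y1 in segment(image):
        sub_image = [row[x0:x1] for row in image[y0:y1]]  # crop the box
        results.append(recognize(sub_image))
    return results
```

A real system would use tensors and trained networks; the point of the sketch is only the data flow between the two networks.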
In an optional embodiment, in step S105, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect is determined, which specifically includes the following processes:
S1051, searching a special effect library for a fusion AR special effect matched with the first AR special effect and the second AR special effect, wherein the special effect library comprises special effect information of the fusion AR special effects of the AR special effects corresponding to the multiple identifiers.
S1052, determining the found fusion AR special effect as the third AR special effect.
In the embodiment of the present disclosure, after the first identifier and the second identifier are obtained through recognition, the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier may be determined. Here, there may be a plurality of first AR special effects, and there may also be a plurality of second AR special effects.
Specifically, the AR special effect matched with the first identifier may be searched in the special effect library, and the determined matched AR special effect may be determined as the first AR special effect. And searching the AR special effect matched with the second identification in the special effect library, and determining the determined matched AR special effect as a second AR special effect.
After the first AR special effect and the second AR special effect are determined, the fused AR special effect may be searched in a special effect library, and a specific search process is described as follows:
first, a special effect label corresponding to each AR special effect in the special effect library is obtained, where the special effect label is used to indicate a type of each AR special effect (e.g., a fusion special effect or a non-fusion special effect). The corresponding special effect label for each fusion special effect further comprises: special effect information of the AR special effect for fusion; and for the non-fusion special effect, identification information (for example, identification information of the first identification or identification information of the second identification) corresponding to the non-fusion special effect.
At this time, the special effect label can search the fusion AR special effect matched with the first AR special effect and the second AR special effect in the special effect library. And after finding the fusion AR special effect, determining a third AR special effect according to the fusion AR special effect.
When only one fusion AR special effect is found, it may be determined as the third AR special effect. When a plurality of fusion AR special effects are found, the fusion AR special effect that matches the user information may be screened out from them as the third AR special effect.
In this disclosure, the special effect library may include the AR special effect corresponding to at least one first identifier, and may further include the AR special effect corresponding to at least one second identifier. The AR special effect corresponding to each first identifier may be a 3D special effect, and the AR special effect corresponding to each second identifier may be a 2D special effect and/or a 3D special effect; therefore, the fusion AR special effects included in the special effect library may be fusion special effects between a 3D special effect and a 2D special effect, between two 3D special effects, or between two 2D special effects.
The special effect label corresponding to each AR special effect in the special effect library may include, in addition to the content described above, the fusion mode between special effects. For example, in the above special effect library, the 3D special effect corresponding to the first identifier is recorded as A1, and the 2D special effect corresponding to the second identifier is recorded as B1. The fusion AR special effect between the first AR special effect A1 and the 2D special effect B1 may then be recorded as A1B1. At this time, the special effect label pointing to the fusion AR special effect A1B1 in the special effect library may contain the following information: the information of the 3D special effect A1 and the 2D special effect B1, and the fusion mode of the 3D special effect A1 and the 2D special effect B1.
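The label-based library lookup above can be sketched as a search over tagged entries. The entry names (`A1`, `B1`, `A1B1`) follow the example in the text; the dictionary layout and field names are invented for illustration.

```python
# Hypothetical effect library: each entry carries the special effect label
# described in the text (type, source effects for fusion entries, fusion mode).
EFFECT_LIBRARY = [
    {"name": "A1B1", "type": "fusion", "sources": {"A1", "B1"},
     "fusion_mode": "embedded"},
    {"name": "A1", "type": "non-fusion", "identifier": "first_id_1"},
]

def find_fusion_effect(first_effect, second_effect, library=EFFECT_LIBRARY):
    """Return the fusion entry whose label lists exactly both source effects."""
    wanted = {first_effect, second_effect}
    for entry in library:
        if entry["type"] == "fusion" and entry["sources"] == wanted:
            return entry
    return None  # no pre-built fusion effect; fall back to on-the-fly fusion
```

The `None` case corresponds to the fallback described next, where the two effects are fused according to a preset fusion mode.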
In the above embodiment, by determining the third AR special effect according to the fusion special effect of the determined first AR special effect and second AR special effect, different AR special effects can be triggered and displayed when the target bottle is combined with other non-bottle objects. This processing enriches the triggering conditions of the AR special effects and enables AR interaction between the user and the target bottle, thereby increasing the interest of the user during use.
In this embodiment of the present disclosure, determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect further includes the following steps:
and under the condition that the fusion AR special effect is not found in the special effect library, carrying out special effect fusion on the first AR special effect and the second AR special effect according to a preset fusion mode, and obtaining the third AR special effect after fusion.
In an optional implementation manner, if the fusion AR special effect is not found in the special effect library, performing special effect fusion on the first AR special effect and the second AR special effect according to a preset fusion manner. The preset fusion mode can be set to be associated with the special effect types of the first AR special effect and the second AR special effect.
For example, if the first AR special effect and the second AR special effect are both 3D special effects, the first AR special effect and the second AR special effect may be spliced according to a specified splicing order to obtain a third AR special effect.
For another example, if the first AR special effect is a 3D special effect and the second AR special effect is a 2D special effect, a display position of the second AR special effect may be determined in the first AR special effect, and the second AR special effect is displayed at the display position in the process of displaying the first AR special effect, so that the first AR special effect and the second AR special effect are fused to obtain a third AR special effect.
In the embodiment of the present disclosure, after the first AR special effect and the second AR special effect are subjected to special effect fusion according to a preset fusion manner and the third AR special effect is obtained after the fusion, the third AR special effect may be further added to a special effect library, and a special effect label is added to the third AR special effect. For example, the special effects tag may be: the special effect information of the first AR special effect and the second AR special effect, and a preset fusion mode.
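Steps S1051-S1052 plus this fallback-and-cache behavior can be sketched together. The dict-based effects, the mode names (`"splice"`, `"embed"`), and the type-based rule for choosing the preset fusion mode are assumptions made for the example, loosely mirroring the 3D+3D and 3D+2D cases described below.

```python
def get_third_effect(first, second, library):
    """Look up a pre-built fusion effect; if absent, fuse on the fly with a
    preset mode and cache the result back into the library.

    Effects are dicts with "name" and "type" ("2d"/"3d"); `library` maps
    (first_name, second_name) keys to fused-effect entries.
    """
    key = (first["name"], second["name"])
    if key in library:
        return library[key]
    # Preset fusion mode keyed to the effect types, per the examples below.
    mode = "splice" if first["type"] == second["type"] == "3d" else "embed"
    fused = {"name": first["name"] + second["name"], "mode": mode, "sources": key}
    library[key] = fused  # enrich the library and tag it for future lookups
    return fused
```

Caching the newly fused effect is what the text means by adding the third AR special effect, with its label, back into the special effect library.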
In the embodiment, the first AR special effect and the second AR special effect are subjected to special effect fusion to obtain the third AR special effect under the condition that the fusion AR special effect is not found, so that the special effect content in the special effect library can be enriched, a richer special effect is shown for a user, and the use experience of the user is improved.
Here, the preset fusion method described above may include an embedded fusion method and a splicing fusion method, and the two fusion methods will be described in detail below.
The first method is as follows: embedded fusion mode
In embodiments of the present disclosure, the position of addition of the second AR effect may be determined in the first AR effect. For example, if the first AR effect is a video effect, the position of the addition of the second AR effect may be determined in each video frame of the video effect.
In the embodiment of the present disclosure, the adding position of the second AR special effect may be determined by the following described policy for the first mode:
strategy one:
The area in the first AR special effect located near the second identifier is taken as the adding position of the second AR special effect. For example, a circular area or a rectangular area may be set with the center of the second identifier as the origin, and that area may be used as the adding position of the second AR special effect.
And (2) strategy two:
A special effect type of the second AR special effect is determined. If, according to the special effect type, the second AR special effect is determined to be a sticker special effect acting on the target bottle in the live-action image, the target bottle may be detected in each video frame of the first AR special effect to determine the adding position of the second AR special effect on the target bottle.
Strategy three:
the adding position of the second AR special effect is a fixed adding position, and for example, the adding position may be a lower left corner region or a lower right corner region of the first AR special effect.
The following examples illustrate:
assume that the first AR effect is a 3D effect and the second AR effect is a 2D effect (e.g., a sticker effect).
If the 3D special effect is a promotional video of the manufacturer producing the target bottle, and the 2D special effect is an identifier of that manufacturer, for example the manufacturer's logo (i.e., a logo-type 2D special effect), then when the promotional video is displayed, the manufacturer's logo can be displayed at a fixed position of the promotional video (i.e., the adding position of the second AR special effect), for example the lower left corner or the lower right corner.
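Strategies one and three above amount to simple coordinate rules, which can be sketched as follows. The parameter names, the square overlay, and the fixed-corner choice are illustrative assumptions; strategy two (per-frame bottle detection) is omitted because it needs a detector.

```python
def addition_position(strategy, frame_size, marker_center=None, box_size=40):
    """Pick the top-left (x, y) for overlaying the 2D effect on a frame
    of the first AR effect, per the strategies described in the text.
    """
    w, h = frame_size
    if strategy == "near_marker" and marker_center is not None:
        # Strategy one: a box_size x box_size area centred on the second identifier.
        cx, cy = marker_center
        return (cx - box_size // 2, cy - box_size // 2)
    if strategy == "fixed_corner":
        # Strategy three: a fixed position, here the lower-right corner.
        return (w - box_size, h - box_size)
    raise ValueError("unsupported strategy")
```

With the manufacturer-logo example, `fixed_corner` would place the logo sticker in the corner of every frame of the promotional video.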
The second method comprises the following steps: splicing type fusion mode
In this embodiment of the present disclosure, a splicing position, that is, a display order, may be set for each 3D special effect corresponding to the first AR special effect. For example, the splicing position of the 3D special effect may be set as displayed first or displayed later. If the first AR special effect is set to be displayed first, then when the special effects are fused, the second AR special effect may be appended after the first AR special effect to obtain the third AR special effect; that is, when the third AR special effect is displayed, the first AR special effect is displayed first, and then the second AR special effect is displayed.
For example, the display order of the 3D special effect and the 2D special effect in the third AR special effect may first be determined; here, the 3D special effect may be a promotional video of the manufacturer producing the target bottle, and the 2D special effect may be an identifier of that manufacturer, for example the manufacturer's logo.
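The splicing-style fusion reduces to concatenating the two effects in the configured order, sketched here with plain labels standing in for rendered frames (the function and parameter names are invented):

```python
def splice_effects(first_frames, second_frames, second_position="after"):
    """Splicing-style fusion: play one effect, then the other, according to
    the configured splicing position ("after" appends the second effect
    after the first, so the first AR effect plays first).
    """
    if second_position == "after":
        return list(first_frames) + list(second_frames)
    return list(second_frames) + list(first_frames)
```

In the manufacturer example, the promotional-video frames would be followed (or preceded) by frames showing the logo, depending on the configured order.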
In an optional embodiment, in step S105, when determining a third AR special effect obtained by fusing the first AR special effect and the second AR special effect, the following process is further included:
(1) Type information of a target object in the live-action image is identified, wherein the target object comprises: the target bottle, and/or the physical object.
(2) The third AR special effect obtained by fusing the first AR special effect and the second AR special effect is determined according to the type information.
In an optional implementation manner, after the corresponding first AR special effect and second AR special effect are respectively determined according to the first identifier and the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect may also be determined according to type information of a target object in the recognized live-action image.
If the same first identifier is arranged on different types of bottles, for example on both a sports bottle and a children's bottle, a third AR special effect with a different display effect can be presented for each type of bottle. For example, for the sports bottle, a sports element may be added to the third AR special effect; for the children's bottle, a cartoon element may be added.
Similarly, suppose different types of physical objects other than the target bottle, for example a postcard and a ticket, are provided with the same second identifier. For the postcard, an element corresponding to the pattern contained in the postcard may be added to the third AR special effect; for the ticket, a presentation element related to the venue corresponding to the ticket may be added.
In the above embodiment, by identifying the type information of the target object in the live-action image and then determining the fused third AR special effect according to that type information, third AR special effects containing different elements can be triggered and displayed for different types of target bottles and/or physical objects. This enriches both the special effect types and the triggering modes of the AR special effects and improves the interest of the AR technology, while also helping the user better understand the related content of the target bottle, so that the target bottle can be promoted.
In an optional implementation manner, in step S105, the determining a third AR special effect obtained by fusing the first AR special effect and the second AR special effect specifically includes the following processes:
(1) User information of the user to which the AR device belongs is acquired, wherein the user information comprises at least one of: the basic attributes of the user, the device type of the AR device, the number of times the user has accessed the special effect display page, and the historical AR special effects pushed for the user.
(2) The fusion special effect type matched with the user information is determined.
(3) The third AR special effect obtained by fusing the first AR special effect and the second AR special effect is determined according to the fusion special effect type.
In the embodiment of the present disclosure, when the augmented reality AR device is triggered to load the H5 page, user information of a user to which the augmented reality AR device belongs may be acquired.
Here, the basic attributes of the user may be at least one of: age, gender, hobbies, and the like. The device type of the augmented reality AR device may be at least one of: a wearable augmented reality AR device, or a handheld terminal device supporting the augmented reality AR function. The number of accesses can be understood as the number of times the user visited the special effect display page (or loaded the H5 page) over a period of time. The historical AR special effects pushed for the user can be understood as the record of AR special effects pushed, within a past period of time, for users with the same user information.
After determining the user information, at least one fused special effect type matching the user information may be determined.
For example, different styles of fused special effect types may be determined for a user based on different user base attributes, different access times, historical AR special effects, and different device types.
After determining at least one fused special effect type, determining a third AR special effect after the first AR special effect and the second AR special effect are fused according to the at least one fused special effect type.
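A minimal sketch of matching fusion-effect styles to user information follows. The concrete mapping rules (cartoon styles for young users, dropping heavyweight 3D styles on handheld devices) are invented for illustration; the disclosure only states that the style should follow the user information.

```python
def match_fusion_types(user_info, all_types):
    """Select candidate fusion-effect styles from user information.

    `user_info` is a dict with optional "age" and "device" keys; the
    filtering rules here are illustrative assumptions.
    """
    candidates = set(all_types)
    if user_info.get("age", 99) < 12:
        candidates &= {"cartoon"}          # assumed rule: children get cartoon styles
    if user_info.get("device") == "handheld":
        candidates -= {"heavy_3d"}         # assumed rule: avoid heavy 3D on handhelds
    return sorted(candidates)
```

The third AR special effect would then be chosen from the returned candidate types, as the text describes.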
In the above embodiment, by determining at least one fusion special effect type matched with the user information and determining a third AR special effect in the at least one fusion special effect type, different users can customize AR special effects of different styles individually, so as to improve the user experience of the users.
In the embodiment of the present disclosure, when determining the fusion special effect type matched with the user information, the method specifically includes the following processes:
(1) In the case that the user information contains historical AR special effects, the historical special effect type of the target historical AR special effect most recently pushed for the user is determined among the historical AR special effects.
(2) The fusion special effect type is determined among the special effect types, other than the historical special effect type, in a plurality of preset special effect types.
In the embodiment of the present disclosure, if the user information includes the historical AR special effect, the target historical AR special effect that is pushed for the user at the latest time is determined according to the historical AR special effect, and then, the type of the historical AR special effect corresponding to the target historical AR special effect can be determined. Then, determining other special effect types except the historical special effect type in a plurality of preset special effect types of the special effect library, and determining the fusion special effect type in the other special effect types.
After the type of the fused special effect is determined, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect can be determined according to the type of the fused special effect.
In the above embodiment, by the processing method, the same AR special effect can be prevented from being continuously pushed to the user, so that the user experience can be improved.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, a data display device corresponding to the data display method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the data display method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 6, a schematic diagram of a data display device provided in an embodiment of the present disclosure is shown, where the data display device includes: an acquisition unit 61, a detection unit 62, a determination unit 63, and a display unit 64; wherein:
the acquiring unit 61 is used for acquiring a live-action image which is acquired by the augmented reality AR device and contains the target bottle body;
a detecting unit 62, configured to detect a first identifier corresponding to the target bottle and a second identifier corresponding to an entity object other than the target bottle in the live-action image;
a determining unit 63, configured to determine, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect;
and a display unit 64 for displaying the third AR special effect.
In the embodiment of the disclosure, a first identifier corresponding to a target bottle and a second identifier corresponding to a physical object other than the target bottle are detected in the live-action image; a third AR special effect formed by fusing the first AR special effect corresponding to the first identifier and the second AR special effect corresponding to the second identifier is then determined, so that different AR special effects can be triggered and displayed when the target bottle is combined with physical objects other than the target bottle. This processing enriches the triggering conditions of the AR special effects and enables AR interaction between the user and the target bottle. In addition to increasing the user's interest, it improves the user's experience of the AR technology, satisfies diversified usage demands, and also allows the relevant information of the target bottle to be widely promoted.
In a possible implementation, the determining unit 63 is further configured to: search a special effect library for a fusion AR special effect matching the first AR special effect and the second AR special effect, wherein the special effect library comprises special effect information of fusion AR special effects of the AR special effects corresponding to a plurality of identifiers; and determine the found fusion AR special effect as the third AR special effect.
In a possible implementation, the determining unit 63 is further configured to: in a case where the fusion AR special effect is not found in the special effect library, fuse the first AR special effect and the second AR special effect according to a preset fusion manner to obtain the third AR special effect.
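The look-up-then-fallback logic of the two implementations above can be sketched as follows; the frozenset pair key and all function names are assumptions for illustration, since the disclosure does not specify how the library is indexed.

```python
def find_third_effect(effect_library, first_effect, second_effect, preset_fuse):
    """Look up a pre-authored fusion effect for the pair of effects; if none
    is found in the library, fall back to fusing with the preset manner."""
    key = frozenset((first_effect, second_effect))  # order-independent pair key
    fused = effect_library.get(key)
    if fused is None:
        # fallback: fuse the two effects according to the preset fusion manner
        fused = preset_fuse(first_effect, second_effect)
    return fused
```

Keying the library on an unordered pair means the same fusion effect is returned regardless of which identifier was detected first; that is a design choice of this sketch, not a requirement of the disclosure.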
In a possible implementation, the determining unit 63 is further configured to: identify type information of a target object in the live-action image, wherein the target object comprises: the target bottle, and/or the entity object; and determine, according to the type information, the third AR special effect obtained after the first AR special effect and the second AR special effect are fused.
In a possible implementation, the acquiring unit 61 is further configured to: in response to a loading request for an H5 page, load and display the H5 page on a display interface of the augmented reality AR device, wherein the H5 page contains prompt information for prompting a user to jump to a special effect display page; and in response to a trigger operation on the prompt information in the H5 page, jump to the special effect display page, acquire the live-action image, and display the live-action image on the special effect display page.
In a possible implementation, the determining unit 63 is further configured to: acquire user information of the user to whom the AR device belongs, wherein the user information comprises at least one of: a basic attribute of the user, a device type of the AR device, the number of times the user has accessed the special effect display page, and a historical AR special effect pushed for the user; determine a fused special effect type matching the user information; and determine, according to the fused special effect type, the third AR special effect obtained after the first AR special effect and the second AR special effect are fused.
In a possible implementation, the determining unit 63 is further configured to: in a case where the user information contains the historical AR special effects, determine a historical special effect type of a target historical AR special effect that was most recently pushed for the user among the historical AR special effects; and determine the fused special effect type from special effect types other than the historical special effect type among a plurality of preset special effect types.
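A minimal sketch of this selection rule, avoiding the type of the most recently pushed historical AR special effect so that consecutive pushes differ, could be as follows; the data shapes and the choice of the first remaining candidate are assumptions, since the disclosure leaves the final selection policy open.

```python
def pick_fused_type(preset_types, history_effects):
    """preset_types: ordered list of preset special effect types.
    history_effects: chronological list of pushed effects, each a dict with
    a "type" key; the last element is the most recently pushed effect."""
    if history_effects:
        last_type = history_effects[-1]["type"]
        # exclude the historical type of the most recently pushed effect
        candidates = [t for t in preset_types if t != last_type]
    else:
        candidates = list(preset_types)
    # selection among the remaining types is unspecified; take the first here
    return candidates[0]
```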
In a possible implementation, the determining unit 63 is further configured to: in a case where the live-action image contains a plurality of bottles, determine, in response to a selection instruction for a target bottle in the live-action image, the identifier of the target bottle as the first identifier; and determine, among the identifiers corresponding to the entity articles in the live-action image, an identifier associated with the first identifier as the second identifier.
In a possible implementation, the determining unit 63 is further configured to: determine a trigger position of the selection instruction in the live-action image; and determine the target bottle according to the trigger position, identify the identifier corresponding to the target bottle, and determine the identified identifier as the first identifier.
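The trigger-position lookup could be implemented as a simple hit test against the detected bottles' boxes; the following sketch assumes axis-aligned bounding boxes, which the disclosure does not mandate, and all names are illustrative.

```python
def identifier_at(trigger_xy, bottle_boxes):
    """bottle_boxes: list of (identifier, (x0, y0, x1, y1)) pairs, one per
    detected bottle. Return the identifier of the bottle whose bounding box
    contains the trigger position, or None if the tap hits no bottle."""
    x, y = trigger_xy
    for identifier, (x0, y0, x1, y1) in bottle_boxes:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return identifier  # this becomes the first identifier
    return None
```

A production system would likely hit-test against segmentation masks rather than boxes and resolve overlaps by depth order; the box test is only the simplest workable choice.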
In a possible implementation manner, the device is applied to a client application platform, and the client application platform is a Web-side application platform or an applet-side application platform.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Corresponding to the data display method in fig. 1, an embodiment of the present disclosure further provides a computer device 700, as shown in fig. 7, a schematic structural diagram of the computer device 700 provided in the embodiment of the present disclosure includes:
a processor 71, a memory 72, and a bus 73. The memory 72 is configured to store execution instructions and includes an internal memory 721 and an external memory 722. The internal memory 721 temporarily stores operation data for the processor 71 and data exchanged with the external memory 722, such as a hard disk; the processor 71 exchanges data with the external memory 722 through the internal memory 721. When the computer device 700 runs, the processor 71 communicates with the memory 72 through the bus 73, so that the processor 71 executes the following instructions:
acquiring a live-action image that is collected by an augmented reality AR device and contains a target bottle body; detecting a first identifier corresponding to the target bottle body and a second identifier corresponding to a solid object other than the target bottle body in the live-action image; determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained after the first AR special effect and the second AR special effect are fused; and displaying the third AR special effect.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the data display method in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with this technical field can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A method for displaying data, comprising:
acquiring a live-action image which is acquired by augmented reality AR equipment and contains a target bottle body;
detecting a first identifier corresponding to the target bottle body and a second identifier corresponding to a solid object other than the target bottle body in the live-action image;
determining, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained after the first AR special effect and the second AR special effect are fused;
and displaying the third AR special effect.
2. The method of claim 1, wherein the determining a third AR effect after the first AR effect and the second AR effect are fused based on a first AR effect corresponding to the first identifier and a second AR effect corresponding to the second identifier comprises:
searching a special effect library for a fusion AR special effect matching the first AR special effect and the second AR special effect, wherein the special effect library comprises special effect information of fusion AR special effects of the AR special effects corresponding to a plurality of identifiers;
and determining the searched fusion AR special effect as the third AR special effect.
3. The method of claim 2, wherein determining a third AR effect after the first AR effect and the second AR effect are fused based on a first AR effect corresponding to the first identifier and a second AR effect corresponding to the second identifier further comprises:
and under the condition that the fusion AR special effect is not found in the special effect library, carrying out special effect fusion on the first AR special effect and the second AR special effect according to a preset fusion mode, and obtaining the third AR special effect after fusion.
4. The method of any of claims 1 to 3, wherein said determining a third AR effect of said first AR effect and said second AR effect after fusion comprises:
identifying type information of a target object in the live-action image, wherein the target object comprises: the target bottle body, and/or the solid object;
and determining a third AR special effect after the first AR special effect and the second AR special effect are fused according to the type information.
5. The method of any one of claims 1 to 4, wherein the acquiring of the live-action image containing the target bottle collected by the AR device comprises:
loading and displaying the H5 page on a display interface of the AR device in response to a loading request of an H5 page, wherein the H5 page contains prompt information for prompting a user to jump to a special effect display page;
responding to the triggering operation of the prompt message in the H5 page, jumping to the special effect display page, acquiring the live-action image, and displaying the live-action image on the special effect display page.
6. The method of any of claims 1 to 5, wherein said determining a third AR effect of said first AR effect and said second AR effect after fusion comprises:
acquiring user information of a user to which the AR device belongs; wherein the user information comprises at least one of: a basic attribute of the user, a device type of the AR device, the number of times the user has accessed the special effect display page, and a historical AR special effect pushed for the user;
determining a fusion special effect type matched with the user information;
and determining a third AR special effect after the first AR special effect and the second AR special effect are fused according to the type of the fused special effect.
7. The method of claim 6, wherein determining the fused special effect type that matches the user information comprises:
determining a history special effect type of a target history AR special effect which is pushed for the user at the latest time in the history AR special effects under the condition that the user information contains the history AR special effects;
determining the fused special effect type in other special effect types except the historical special effect type in a plurality of preset special effect types.
8. The method of any one of claims 1 to 7, wherein the detecting a first identifier corresponding to the target bottle body and a second identifier corresponding to a solid object other than the target bottle body in the live-action image comprises:
under the condition that the number of bottles contained in the live-action image is multiple, responding to a selection instruction aiming at a target bottle in the live-action image, and determining that the identifier of the target bottle is the first identifier;
and determining the identifier associated with the first identifier as the second identifier in the identifiers corresponding to all the entity articles in the live-action image.
9. The method of claim 8, wherein the determining that the identifier of the target bottle is the first identifier comprises:
determining a trigger position of the selection instruction in the live-action image;
and determining the target bottle body according to the trigger position, identifying the identifier corresponding to the target bottle body, and determining the identified identifier as the first identifier.
10. The method according to any one of claims 1 to 9, wherein the method is applied to a client application platform, and the client application platform is a Web-side application platform or an applet-side application platform.
11. A data presentation device, comprising:
the acquisition unit is used for acquiring a live-action image which is acquired by the augmented reality AR equipment and contains the target bottle body;
the detection unit is configured to detect a first identifier corresponding to the target bottle body and a second identifier corresponding to a solid object other than the target bottle body in the live-action image;
a determining unit, configured to determine, based on a first AR special effect corresponding to the first identifier and a second AR special effect corresponding to the second identifier, a third AR special effect obtained by fusing the first AR special effect and the second AR special effect;
and the display unit is used for displaying the third AR special effect.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the data presentation method of any one of claims 1 to 10.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the data presentation method according to any one of claims 1 to 10.
CN202110620107.6A 2021-06-03 2021-06-03 Data display method and device, computer equipment and storage medium Pending CN113359985A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110620107.6A CN113359985A (en) 2021-06-03 2021-06-03 Data display method and device, computer equipment and storage medium
PCT/CN2021/133452 WO2022252518A1 (en) 2021-06-03 2021-11-26 Data presentation method and apparatus, and computer device, storage medium and computer program product
TW111107480A TW202248961A (en) 2021-06-03 2022-03-02 Data display method computer equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110620107.6A CN113359985A (en) 2021-06-03 2021-06-03 Data display method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113359985A true CN113359985A (en) 2021-09-07

Family

ID=77531830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110620107.6A Pending CN113359985A (en) 2021-06-03 2021-06-03 Data display method and device, computer equipment and storage medium

Country Status (3)

Country Link
CN (1) CN113359985A (en)
TW (1) TW202248961A (en)
WO (1) WO2022252518A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113867528A (en) * 2021-09-27 2021-12-31 北京市商汤科技开发有限公司 Display method, device, equipment and computer readable storage medium
WO2022252518A1 (en) * 2021-06-03 2022-12-08 北京市商汤科技开发有限公司 Data presentation method and apparatus, and computer device, storage medium and computer program product

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100026470A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation Fusing rfid and vision for surface object tracking
CN106774874A (en) * 2016-12-12 2017-05-31 大连文森特软件科技有限公司 Culinary art accessory system based on AR augmented realities and based on color evaluation
US20170206417A1 (en) * 2012-12-27 2017-07-20 Panasonic Intellectual Property Corporation Of America Display method and display apparatus
CN107526443A (en) * 2017-09-29 2017-12-29 北京金山安全软件有限公司 Augmented reality method, device, system, electronic equipment and storage medium
US20180020252A1 (en) * 2015-03-27 2018-01-18 Tencent Technology (Shenzhen) Company Limited Information display method, channel management platform, and terminal
US20180114566A1 (en) * 2015-12-17 2018-04-26 Panasonic Intellectual Property Corporation Of America Display method and display device
CN109582122A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 Augmented reality information providing method, device and electronic equipment
US20190311341A1 (en) * 2018-04-06 2019-10-10 Robert A. Rice Systems and methods for item acquisition by selection of a virtual object placed in a digital environment
US10598936B1 (en) * 2018-04-23 2020-03-24 Facebook Technologies, Llc Multi-mode active pixel sensor
US10719993B1 (en) * 2019-08-03 2020-07-21 VIRNECT inc. Augmented reality system and method with space and object recognition
CN111625100A (en) * 2020-06-03 2020-09-04 浙江商汤科技开发有限公司 Method and device for presenting picture content, computer equipment and storage medium
CN111638792A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 AR effect presentation method and device, computer equipment and storage medium
CN111640197A (en) * 2020-06-09 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect control method, device and equipment
CN111880657A (en) * 2020-07-30 2020-11-03 北京市商汤科技开发有限公司 Virtual object control method and device, electronic equipment and storage medium
US20210034870A1 (en) * 2019-08-03 2021-02-04 VIRNECT inc. Augmented reality system capable of manipulating an augmented reality object
CN112562865A (en) * 2021-02-18 2021-03-26 北京声智科技有限公司 Information association method, device, terminal and storage medium
CN112684894A (en) * 2020-12-31 2021-04-20 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
US20210118235A1 (en) * 2019-10-15 2021-04-22 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for presenting augmented reality data, electronic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140078174A1 (en) * 2012-09-17 2014-03-20 Gravity Jack, Inc. Augmented reality creation and consumption
CN109636888B (en) * 2018-12-05 2023-06-09 网易(杭州)网络有限公司 2D special effect manufacturing method and device, electronic equipment and storage medium
CN111640202B (en) * 2020-06-11 2024-01-09 浙江商汤科技开发有限公司 AR scene special effect generation method and device
CN113359985A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Data display method and device, computer equipment and storage medium

Non-Patent Citations (16)

* Cited by examiner, † Cited by third party
Title
SONGLIN XIE: "Augmented reality three-dimensional display with light field fusion", 《OPTICS EXPRESS》, vol. 24, no. 11, 31 December 2016 (2016-12-31) *
YUJIE WANG: "Multi-Sensor Fusion Tracking Algorithm Based on Augmented Reality System", 《IEEE SENSORS JOURNAL》, vol. 21, no. 22, 27 October 2020 (2020-10-27), XP011887730, DOI: 10.1109/JSEN.2020.3034139 *
GONG Minghao et al.: "Security Threats and Protection Methods of a Container-Based Converged Media Microservice Architecture", 《广播电视信息》 (Radio and Television Information), no. 05
ZHANG Hua et al.: "Interactive Packaging Design Based on Augmented Reality Technology: The Taiping Houkui AR Packaging as an Example", 《湖南工业大学学报(社会科学版)》 (Journal of Hunan University of Technology, Social Science Edition), no. 03
ZHANG Yijiang et al.: "Online Real-Time Fusion of Virtual Crowds and Dynamic Video Scenes", 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), no. 01, 15 January 2011 (2011-01-15)
LI Qian et al.: "Research on Augmented Reality Methods Based on Markerless Recognition", ***仿真学报, no. 07
SHEN Ke et al.: "Research on an Augmented-Reality-Based Human-Machine Physical Interaction Simulation ***", 计算机仿真 (Computer Simulation), no. 04
WANG Zhibei et al.: "A Converged Media *** Based on Mobile Augmented Reality Technology", 工业控制计算机 (Industrial Control Computer), no. 08
XIAO Yu: "Simulation Research on Motion Trajectory Marking in Multimedia Visual Images", 《计算机仿真》 (Computer Simulation), no. 10, 15 October 2018 (2018-10-15)
CHEN Zhengjie: "Research on a Mobile-AR-Based Cultural Relic Display *** in Museum Cultural and Creative Product Design", 《设计》 (Design), no. 01
CHEN Jiping et al.: "Implementation Based on the OGRE Particle *** with User-Selectable Virtual Special Effects", 《陕西师范大学学报(自然科学版)》 (Journal of Shaanxi Normal University, Natural Science Edition), no. 06, 10 November 2011 (2011-11-10)


Also Published As

Publication number Publication date
TW202248961A (en) 2022-12-16
WO2022252518A1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
US10186084B2 (en) Image processing to enhance variety of displayable augmented reality objects
CN107683165B (en) Techniques for generating computer models, and devices, systems, and methods utilizing same
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN106105185B (en) Indicate method, mobile device and the computer readable storage medium of the profile of user
CN112199524A (en) Multimedia resource matching and displaying method and device, electronic equipment and medium
US20160125252A1 (en) Image recognition apparatus, processing method thereof, and program
CN109816441A (en) Tactful method for pushing, system and relevant apparatus
WO2015191461A1 (en) Recommendations utilizing visual image analysis
US9424689B2 (en) System,method,apparatus and computer readable non-transitory storage medium storing information processing program for providing an augmented reality technique
CN113359985A (en) Data display method and device, computer equipment and storage medium
EP2410493A2 (en) Apparatus and method for providing augmented reality using additional data
CN108697934A (en) Guidance information related with target image
CN108805577B (en) Information processing method, device, system, computer equipment and storage medium
CN111640193A (en) Word processing method, word processing device, computer equipment and storage medium
CN113961794A (en) Book recommendation method and device, computer equipment and storage medium
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN113282687A (en) Data display method and device, computer equipment and storage medium
CN112150349A (en) Image processing method and device, computer equipment and storage medium
US20140241586A1 (en) Information retaining medium and information processing system
CN110021062A (en) A kind of acquisition methods and terminal, storage medium of product feature
CN113360805B (en) Data display method, device, computer equipment and storage medium
US20130100296A1 (en) Media content distribution
WO2019192455A1 (en) Store system, article matching method and apparatus, and electronic device
CN114780181B (en) Resource display method, device, computer equipment and medium
CN114049467A (en) Display method, display device, display apparatus, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40051370; country of ref document: HK)