CN111638796A - Virtual object display method and device, computer equipment and storage medium - Google Patents

Virtual object display method and device, computer equipment and storage medium

Info

Publication number
CN111638796A
Authority
CN
China
Prior art keywords
scene
information
virtual object
determining
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010508202.2A
Other languages
Chinese (zh)
Inventor
潘思霁
揭志伟
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Zhejiang Sensetime Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co., Ltd.
Priority to CN202010508202.2A
Publication of CN111638796A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/14: Travel agencies
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F: DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00: Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a virtual object display method and apparatus, a computer device and a storage medium, wherein the method comprises: acquiring scene information of the real scene in which an augmented reality (AR) device is currently located; under the condition that the scene information is determined to meet the condition of a target preset scene, determining virtual object information matched with the target preset scene; determining, based on the virtual object information, a virtual picture corresponding to the virtual object information; and displaying, in the AR device, an AR effect in which the real scene picture and the virtual picture are combined.

Description

Virtual object display method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for displaying a virtual object, a computer device, and a storage medium.
Background
In recent years, with the rapid development of the cultural tourism industry, more and more users visit exhibitions, museums, scenic spots and the like. At present, most visits to such areas suffer from two problems: on the one hand, for exhibits that are relatively obscure, it is difficult for users to gain a deep understanding of their content; on the other hand, the current exhibition mode lacks interaction with users.
Disclosure of Invention
Embodiments of the present disclosure provide at least a virtual object display method and apparatus, a computer device and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for displaying a virtual object, including:
acquiring scene information of a current real scene of an augmented reality AR device;
under the condition that the scene information is determined to meet the condition of a target preset scene, determining virtual object information matched with the target preset scene;
determining a virtual picture corresponding to the virtual object information based on the virtual object information;
and displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR equipment.
In the embodiments of the present disclosure, when it is recognized that the scene information of the real scene in which the AR device is currently located meets the condition of the target preset scene, an AR effect in which the real scene picture and the virtual picture are combined can be presented in the AR device based on the virtual picture corresponding to the virtual object information matched with the target preset scene. When applied to the cultural tourism industry, the above scheme can superimpose, on the real content displayed by an exhibition item, the virtual picture of the virtual object matched with the current scene information, thereby presenting an AR special effect related to that exhibition item. On the one hand, this enriches the presentation form of the exhibition item; on the other hand, when a user passes the exhibition item holding an AR device, presenting the related AR special effect on the device also strengthens the interaction with the user, deepens the user's impression of the displayed content, and further improves the user's visual experience.
In some embodiments of the present disclosure, the acquiring scene information of a real scene in which the AR device is currently located includes:
acquiring the current geographical position information of the AR equipment, and taking the geographical position information as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the geographical position information with a geographical position range corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the geographic position information is determined to be in the geographic position range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the geographical position information of the real scene in which the AR device is located. In this way, when a user carrying the AR device enters the geographical position range corresponding to the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
In some embodiments of the present disclosure, the acquiring scene information of a real scene in which the AR device is currently located includes:
acquiring a real scene picture acquired by the AR equipment, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the entity object attribute is determined to be consistent with any attribute tag of the target preset scene, determining that the scene information meets the condition of the target preset scene.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the attribute of the entity object in the real scene picture captured by the AR device. In this way, when a user holding the AR device captures a real scene picture corresponding to any attribute tag of the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
In some embodiments of the present disclosure, the acquiring scene information of a real scene in which the AR device is currently located includes:
acquiring a real scene picture acquired by the AR equipment;
recognizing a character area in the real scene picture, and determining the recognized character content as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the recognized text content with text content respectively corresponding to at least one preset scene in a preset scene library;
and under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the text content in the real scene picture captured by the AR device. In this way, when a user holding the AR device captures a real scene picture corresponding to the text content of the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
In some embodiments of the present disclosure, the determining virtual object information matching the target preset scene includes:
and searching virtual object information corresponding to the scene identification of the target preset scene from a preset virtual object library, wherein various kinds of virtual object information and scene identifications corresponding to various kinds of virtual object information are recorded in the virtual object library.
In some embodiments of the present disclosure, the target preset scene and the matched virtual object information belong to the same specific topic.
In this embodiment, the virtual object information and the corresponding scene identifier may be configured individually according to the specific exhibition requirements of an exhibition item, so that once the current scene information is acquired and found to satisfy the target preset scene, the virtual object information corresponding to the scene identifier of that target preset scene can be obtained directly.
In a second aspect, an embodiment of the present disclosure further provides an apparatus for displaying a virtual object, including:
the acquiring module is used for acquiring scene information of a current real scene of the AR equipment;
the first determining module is used for determining virtual object information matched with a target preset scene under the condition that the scene information is determined to meet the condition of the target preset scene;
a second determination module, configured to determine, based on the virtual object information, a virtual picture corresponding to the virtual object information;
and the display module is used for displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR equipment.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module is specifically configured to:
acquiring the current geographical position information of the AR equipment, and taking the geographical position information as the scene information;
the first determining module, when determining that the scene information satisfies a condition of a target preset scene, is specifically configured to:
matching the geographical position information with a geographical position range corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the geographic position information is determined to be in the geographic position range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module is specifically configured to:
acquiring a real scene picture acquired by the AR equipment, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as the scene information;
the first determining module, when determining that the scene information satisfies a condition of a target preset scene, is specifically configured to:
matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the entity object attribute is determined to be consistent with any attribute tag of the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module is specifically configured to:
acquiring a real scene picture acquired by the AR equipment;
recognizing a character area in the real scene picture, and determining the recognized character content as the scene information;
the first determining module, when determining that the scene information satisfies a condition of a target preset scene, is specifically configured to:
matching the recognized text content with text content respectively corresponding to at least one preset scene in a preset scene library;
and under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when determining the virtual object information matched with the target preset scene, the first determining module is specifically configured to:
and searching virtual object information corresponding to the scene identification of the target preset scene from a preset virtual object library, wherein various kinds of virtual object information and scene identifications corresponding to various kinds of virtual object information are recorded in the virtual object library.
In some embodiments of the present disclosure, the target preset scene and the matched virtual object information belong to the same specific topic.
In a third aspect, an embodiment of the present disclosure further provides a computer device comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps in the first aspect or in any one of the possible implementations of the first aspect.
According to the method, apparatus, computer device and storage medium provided by the embodiments of the present disclosure, scene information of the real scene in which the AR device is currently located can be acquired; when it is recognized that the scene information of the real scene meets the condition of the target preset scene, virtual object information matched with the target preset scene is obtained; further, based on the virtual picture corresponding to the virtual object information, an AR effect in which the real scene picture and the virtual picture are combined can be presented in the AR device. When applied to the cultural tourism industry, the above scheme can, based on the scene information of an exhibition item, superimpose on the real exhibited content the virtual picture of the virtual object matched with the current scene information, thereby presenting an AR special effect related to the exhibition item. On the one hand, this enriches the presentation form of the exhibition item; on the other hand, when a user passes the exhibition item holding an AR device, presenting the related AR special effect on the device also strengthens the interaction with the user, deepens the user's impression of the displayed content, and further improves the user's visual experience.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art will be able to derive further related drawings from them without exercising inventive effort.
Fig. 1 shows a flowchart of a virtual object display method provided by an embodiment of the present disclosure;
Fig. 2 shows a flowchart of a first example of the virtual object display method provided by an embodiment of the present disclosure;
Fig. 3 shows a flowchart of a second example of the virtual object display method provided by an embodiment of the present disclosure;
Fig. 4 shows a flowchart of a third example of the virtual object display method provided by an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of a virtual object display apparatus provided by an embodiment of the present disclosure;
Fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Augmented Reality (AR) technology superimposes simulated entity information (visual information, sound, touch and the like) onto the real world, so that the real environment and virtual objects are presented in the same picture or the same space in real time.
The embodiments of the present disclosure may be applied to any computer device supporting AR technology (such as a mobile phone, a tablet or AR glasses), to a server, or to a combination thereof. In the case that the present disclosure is applied to a server, the server may be connected to other computer devices having a communication function and a camera, where the connection may be wired or wireless; the wireless connection may be, for example, a Bluetooth connection or a Wi-Fi connection.
The following describes a method for displaying a virtual object according to an embodiment of the present disclosure in detail.
Referring to fig. 1, a schematic flow chart of a method for displaying a virtual object according to an embodiment of the present disclosure includes the following steps:
S101, scene information of a current real scene of the AR device is obtained.
The scene information is used to describe the characteristics of the real scene in which the AR device is currently located. For example, the scene information may be represented by the geographical position information of the real scene in which the AR device is currently located, to identify the location of the current real scene. Alternatively, the scene information may be represented by an attribute of an entity object actually present in the real scene; for example, the attribute of the entity object may be a person, a plant or a building, which is not limited by the present disclosure. Alternatively, the scene information may be represented by specific text content given in the real scene; for example, if the real scene is an exhibition hall, the name of the exhibition hall or the descriptions of the exhibition areas within it may be given in text, and the recognized name or descriptions may then be used as the scene information. Of course, in practical applications, the scene information may also be represented by other symbols or signs according to the exhibition requirements of a specific exhibition item, which is not limited by the present disclosure.
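Purely as an illustration of the alternatives listed above (the structure and field names below are assumptions of this sketch, not part of the disclosed method), the scene information can be modelled as a small record carrying whichever signal was extracted:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SceneInfo:
    """One of the alternative descriptions of the real scene around the AR device."""
    # Latitude/longitude reported by the device's positioning module, if geolocation is used.
    geo_position: Optional[Tuple[float, float]] = None
    # Attribute (category) of an entity object recognized in the captured real scene picture.
    entity_attribute: Optional[str] = None
    # Text content recognized (e.g. by OCR) in the captured real scene picture.
    text_content: Optional[str] = None


# Example: scene information built from a recognized exhibition-area description.
info = SceneInfo(text_content="Tang dynasty exhibition area")
```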
In the embodiment of the disclosure, the display method of the virtual object can be applied to an AR device or a server. When the display method is applied to the AR equipment, the scene information of the real scene can be identified by using the detection capability of the AR equipment, or the data related to the real scene is sent to the server, and the server further identifies the scene information based on the related data. When the display method is applied to the server, the AR device or other electronic devices may send the scene information of the detected real scene to the server. The specific manner of acquiring the scene information is not limited in the present disclosure.
S102, under the condition that the scene information meets the condition of the target preset scene, virtual object information matched with the target preset scene is determined.
In the embodiment of the present disclosure, the provided scene information has various forms, and accordingly, the condition of the target preset scene may be preset based on the specific form of the scene information. The specific form of the selected scene information and the manner of determining whether the scene information meets the condition of the target preset scene will be exemplarily described in the following embodiments, and the description thereof will not be provided here. The processing procedure of determining whether the scene information meets the condition of the target preset scene may be completed in the AR device or the server, which is not limited in this disclosure.
In some embodiments of the present disclosure, when determining the virtual object information matched with the target preset scene, the virtual object information corresponding to the scene identifier of the target preset scene may be searched from a preset virtual object library. The virtual object library may record various kinds of virtual object information and scene identifiers corresponding to the various kinds of virtual object information.
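A minimal sketch of such a lookup, assuming the virtual object library is held as an in-memory mapping from scene identifier to virtual object information (the identifiers and entries below are hypothetical examples, not data from the disclosure):

```python
from typing import Optional

# Hypothetical virtual object library: scene identifier -> virtual object information.
VIRTUAL_OBJECT_LIBRARY = {
    "tang_dynasty": {"type": "animation_video", "asset": "li_bai_portrait.mp4"},
    "terracotta_warriors": {"type": "model_parameters", "asset": "qin_figure_keypoints.json"},
}


def find_virtual_object(scene_id: str) -> Optional[dict]:
    """Return the virtual object information recorded for the given scene identifier, if any."""
    return VIRTUAL_OBJECT_LIBRARY.get(scene_id)
```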
In specific implementation, the virtual object information and the corresponding scene identifier can be set in a personalized manner according to specific exhibition requirements of exhibition items, so that the virtual object information corresponding to the scene identifier of the target preset scene can be directly acquired under the condition that the current scene information is acquired and the scene information meets the target preset scene.
For example, assuming that the exhibition item is an exhibition hall devoted to a particular dynasty, the scene identifier of the target preset scene may be set to that dynasty, and the corresponding virtual object information may be set to portrait information of historical figures related to that dynasty, and the like.
In some embodiments, the target preset scene and the matched virtual object information may belong to the same specific theme. In specific implementation, the correspondence between the scene identifier of the target preset scene and the virtual object information may be configured around a specific theme. For example, assuming that the specific theme of the exhibition item is built around exhibits of a certain historical dynasty, the scene identifier of the target preset scene may be set to the identifier of that dynasty, the information of figures related to that dynasty may be set as the virtual object information, and a correspondence between the two may be established, for example a correspondence between "Tang Dynasty" and "portrait of Li Bai".
In this embodiment of the disclosure, when the display method is applied to the AR device, if the preset virtual object library is stored locally in the AR device, the corresponding virtual object information may be directly obtained locally, and if the preset virtual object library is stored in the cloud server, the corresponding virtual object information may be obtained from the server. When the display method is applied to the server, the server can directly obtain the corresponding virtual object information based on the virtual object library stored in the cloud.
S103, based on the virtual object information, a virtual picture corresponding to the virtual object information is determined.
In the embodiment of the present disclosure, the virtual object information may be an animation video of a virtual object rendered by a rendering tool, may also be a rendering parameter required for generating the animation video of the virtual object, and may also be a two-dimensional or three-dimensional model parameter of the virtual object in multiple postures, and the animation video of the virtual object in different postures may be rendered by using the two-dimensional or three-dimensional model parameter. For example, the two-dimensional or three-dimensional model parameters of the virtual object in the plurality of poses may include facial key point parameters and limb key point parameters of the virtual object, and the like. The face and limb key point parameters include, for example, coordinate values of key points, depth values, and the like. Illustratively, the virtual objects include, but are not limited to, any one or combination of characters, animals, plants, buildings, and the like. The form of the virtual object is not particularly limited by the present disclosure.
When the acquired virtual object information is an animation video of a rendered virtual object, the determined virtual picture corresponding to the virtual object information may be a multi-frame video picture in the animation video of the virtual object. And under the condition that the obtained virtual object information is a rendering parameter or a two-dimensional or three-dimensional model parameter of the virtual object in multiple postures, performing image rendering processing on the rendering parameter or the model parameter by using a rendering tool to generate a multi-frame virtual picture of the virtual object, wherein the multi-frame virtual picture is an animation video of the virtual object.
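The branch described above can be sketched as follows; `decode_video_frames` and `render_frames` are placeholder helpers standing in for whatever decoding and rendering tools are actually used, and the dictionary layout is an assumption of this illustration:

```python
from typing import Any, Dict, List


def decode_video_frames(asset: str) -> List[Any]:
    """Placeholder: decode a pre-rendered animation video of the virtual object into frames."""
    return []


def render_frames(parameters: Dict[str, Any]) -> List[Any]:
    """Placeholder: drive a rendering tool with rendering or 2D/3D model parameters."""
    return []


def determine_virtual_frames(info: Dict[str, Any]) -> List[Any]:
    """Return the multi-frame virtual picture for the given virtual object information."""
    if info.get("type") == "animation_video":
        # The information already is a rendered animation video of the virtual object.
        return decode_video_frames(info["asset"])
    # Otherwise it holds rendering parameters or 2D/3D model parameters (e.g. facial and
    # limb key point coordinates and depth values) from which the animation is rendered now.
    return render_frames(info["parameters"])
```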
The content of the virtual picture presented by the virtual object information is not limited in the embodiments of the present disclosure. Illustratively, animation effects of different postures of the virtual object can be presented, and a picture-and-text display effect containing the virtual object can also be presented. The virtual object may be a two-dimensional virtual object or a three-dimensional virtual object. The virtual object may be a character that conforms to a particular theme, such as a character of a historical era or a character in a mythical story. In practical applications, the virtual object information may be configured in combination with the specific theme of the exhibited content of the actual exhibition item, which is not limited by the present disclosure.
When the presentation method is applied to the AR device, the virtual picture corresponding to the virtual object information may be generated locally, or the already generated virtual picture may be obtained directly from a cloud server. When the presentation method is applied to a server, the server may directly generate the virtual picture corresponding to the virtual object information, or search for it locally or on other network devices.
And S104, displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR equipment.
For example, presenting the AR effect in the AR device may be understood as displaying, in the AR device, a virtual picture merged into the real scene. The virtual picture may be rendered directly and merged with the real scene, for example presenting the virtual picture of "Chang'e flying to the moon" so that it appears within a set three-dimensional space of the real scene; or the virtual picture may first be fused with the imaged real scene picture and the merged picture then displayed. Which presentation manner is chosen depends on the device type of the AR device and on the picture presentation technology adopted. For example, since the real scene (rather than an imaged real scene picture) can be seen directly through AR glasses, AR glasses may adopt the presentation manner of directly rendering the virtual picture; for mobile terminal devices such as mobile phones and tablet computers, what is displayed is the real scene picture obtained by imaging the real scene, so the AR effect may be displayed by fusing the real scene picture with the virtual picture.
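A sketch of this device-dependent choice (the device-type strings and helper functions are assumptions used only to make the distinction concrete):

```python
from typing import Any


def render_overlay(virtual_frame: Any) -> None:
    """Placeholder: render only the virtual picture, anchored in the directly visible real scene."""


def blend_and_display(real_frame: Any, virtual_frame: Any) -> None:
    """Placeholder: fuse the imaged real scene picture with the virtual picture and show the result."""


def present_ar_effect(device_type: str, real_frame: Any, virtual_frame: Any) -> None:
    """Present the combined AR effect according to the kind of AR device."""
    if device_type == "ar_glasses":
        # Optical see-through device: the real scene is seen directly, so only the
        # virtual picture needs to be rendered at its set position in space.
        render_overlay(virtual_frame)
    else:
        # Mobile phone / tablet: what the screen shows is the imaged real scene picture,
        # so the AR effect is displayed by fusing it with the virtual picture.
        blend_and_display(real_frame, virtual_frame)
```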
In the embodiments of the present disclosure, when it is recognized that the scene information of the real scene in which the AR device is currently located meets the condition of the target preset scene, an AR effect in which the real scene picture and the virtual picture are combined can be presented in the AR device based on the virtual picture corresponding to the virtual object information matched with the target preset scene. When applied to the cultural tourism industry, this can superimpose, on the real content displayed by an exhibition item, the virtual picture of the virtual object matched with the current scene information, thereby presenting an AR special effect related to the exhibition item. On the one hand, this enriches the presentation form of the exhibition item; on the other hand, when a user passes the exhibition item holding an AR device, presenting the related AR special effect on the device also strengthens the interaction with the user, deepens the user's impression of the displayed content, and further improves the user's visual experience.
In the following, the virtual object display method provided by the embodiments of the present disclosure is described by way of example for each specific form of the scene information.
Referring to fig. 2, a flowchart illustrating a first example of a method for displaying a virtual object according to an embodiment of the present disclosure includes the following steps:
S201, obtaining the current geographic position information of the AR device, and taking the geographic position information as the scene information of the current real scene of the AR device.
For example, the current geographic location information of the AR device may be obtained by detecting through a positioning module built in the AR device, and when the execution subject of the presentation method is a server, the server may receive the current geographic location information sent by the AR device.
The specific form of the geographical location information may be various, and the longitude and latitude coordinates detected by the positioning module may be used as the geographical location information, or the Point of Interest (POI) where the AR device is currently located in the map may be identified by the positioning module and used as the geographical location information.
S202, matching the geographic position information with a geographic position range corresponding to at least one preset scene in a preset scene library, and determining that the scene information meets the condition of the target preset scene under the condition that the geographic position information is determined to be in the geographic position range corresponding to the target preset scene.
For example, a plurality of preset scenes may be preset in the preset scene library, and each preset scene is bound to a corresponding geographic position range. The corresponding geographic location range may be represented by a set latitude and longitude coordinate range, or may be represented by a plurality of set points of interest (POIs) included in a set area.
When detecting that the longitude and latitude coordinates of the AR device at present are within the set longitude and latitude coordinate range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene. Alternatively, when it is detected that the interest point at which the AR device is currently located is included in a plurality of set interest points within the set area, it may be determined that the scene information satisfies the condition of the target preset scene.
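A minimal sketch of the two matching strategies just described, assuming each preset scene stores a latitude/longitude range and a set of POI identifiers (all names, coordinates and bounds below are illustrative assumptions):

```python
from typing import Dict, Optional

# Hypothetical preset scene library keyed by scene identifier.
PRESET_SCENES: Dict[str, dict] = {
    "tang_dynasty_hall": {
        "lat_range": (30.100, 30.120),    # assumed latitude bounds of the exhibition area
        "lng_range": (120.200, 120.230),  # assumed longitude bounds of the exhibition area
        "poi_ids": {"tang_hall_entrance", "tang_hall_main"},
    },
}


def match_scene_by_location(lat: float, lng: float, poi_id: Optional[str] = None) -> Optional[str]:
    """Return the identifier of the preset scene whose geographic range contains the AR device."""
    for scene_id, scene in PRESET_SCENES.items():
        lat_lo, lat_hi = scene["lat_range"]
        lng_lo, lng_hi = scene["lng_range"]
        if lat_lo <= lat <= lat_hi and lng_lo <= lng <= lng_hi:
            return scene_id
        if poi_id is not None and poi_id in scene["poi_ids"]:
            return scene_id
    return None
```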
S203, determining the virtual object information matched with the target preset scene.
And S204, determining a virtual picture corresponding to the virtual object information based on the virtual object information.
S205, displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR equipment.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the geographical position information of the real scene in which the AR device is located. In this way, when a user carrying the AR device enters the geographical position range corresponding to the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
Referring to fig. 3, a flowchart illustrating a second example of a method for displaying a virtual object according to an embodiment of the present disclosure includes the following steps:
S301, acquiring a real scene picture acquired by the AR equipment, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as scene information.
In the embodiment of the present disclosure, when the display method is applied to an AR device, an image capture device (such as a camera) in the AR device may be used to capture a real scene picture in a real scene, and the real scene picture of a single frame may be captured by capturing an image, or the real scene picture of consecutive multiple frames may be captured by capturing a video. When the display method is applied to the server, the AR device or other computer devices with image acquisition functions can send acquired real scene pictures of a single frame or continuous multiple frames to the server. The present disclosure is not limited to the specific manner of image acquisition and the number of frames of the acquired image.
The real scene picture in the embodiment of the present disclosure refers to an image of a real scene captured by an AR device or other computer device. The real scene picture can include at least one entity object in the real scene. For example, for a real scene picture in an exhibition hall, the entity object included in the real scene picture may be at least one exhibit in the exhibition hall, and the like.
Illustratively, an image area where an entity object in a real scene picture is located is determined by using a pre-trained target detection model and the real scene picture, and then attribute recognition is performed on the image area where the entity object is located. The target detection model can be a neural network model, the target detection model can be trained by using the image sample marked with the image area where the entity object is located, and the trained target detection model can accurately identify the image area where the entity object is located in the real scene.
Further, the attribute of the image area where the entity object is located is identified, and the attribute of the image area where the entity object is located can be identified by using a pre-trained attribute detection model, so that an attribute identification result of the entity object is obtained. The attribute detection model can be trained by using image samples labeled with the attributes of the entity objects in advance, and the trained attribute detection model can be used for accurately predicting the attributes of the entity objects.
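The disclosure does not prescribe any particular model; purely as an illustration, a pretrained detector such as torchvision's COCO-trained Faster R-CNN could stand in for the target detection model described above, with its predicted class labels playing the role of coarse entity object attributes:

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Illustrative stand-in for the pre-trained target detection model.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()


def detect_entity_regions(picture_path: str, score_threshold: float = 0.7):
    """Return bounding boxes and class labels of entity objects found in the real scene picture."""
    image = to_tensor(Image.open(picture_path).convert("RGB"))
    with torch.no_grad():
        output = detector([image])[0]
    keep = output["scores"] >= score_threshold
    # Boxes delimit the image areas where entity objects are located; labels are category ids
    # that a separate attribute detection model could refine into finer-grained attributes.
    return output["boxes"][keep], output["labels"][keep]
```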
For example, the entity object attribute may also be understood as the category of the entity object. For instance, if an entity object is a human statue, the sex, age or historical era of the figure represented by the statue can each be regarded as an entity object attribute of that entity object.
S302, matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library; and under the condition that the attribute of the entity object is determined to be consistent with any attribute label of the target preset scene, determining that the scene information meets the condition of the target preset scene.
For example, a corresponding relationship between the preset scenes and the attribute tags may be pre-established, specifically, each preset scene may correspond to at least one attribute tag, or at least one attribute tag may correspond to one preset scene.
For example, the attribute tag may be used to represent an entity object attribute in the target preset scene. One or more attribute tags can be configured in the target preset scene, and when the identified attribute tag of the entity object in the current real scene is the same as any attribute tag in the target preset scene or has a higher similarity, the scene information can be regarded as meeting the condition of the target preset scene.
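A sketch of this condition check; the attribute tags, the use of simple string similarity as the "higher similarity" test, and the threshold are all assumptions of the illustration (a real system might use an embedding-based similarity instead):

```python
from difflib import SequenceMatcher
from typing import Dict, Optional, Set

# Hypothetical attribute tags bound to each preset scene.
SCENE_ATTRIBUTE_TAGS: Dict[str, Set[str]] = {
    "terracotta_warriors": {"human statue", "warrior figure"},
    "tang_dynasty_hall": {"tang-era relic", "porcelain"},
}


def match_scene_by_attribute(entity_attribute: str, threshold: float = 0.8) -> Optional[str]:
    """Return the preset scene one of whose attribute tags matches the recognized attribute."""
    attr = entity_attribute.lower()
    for scene_id, tags in SCENE_ATTRIBUTE_TAGS.items():
        for tag in tags:
            if attr == tag or SequenceMatcher(None, attr, tag).ratio() >= threshold:
                return scene_id
    return None
```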
And S303, determining virtual object information matched with the target preset scene.
And S304, based on the virtual object information, determining the virtual picture corresponding to the virtual object information.
S305, displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR device.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the attribute of the entity object in the real scene picture captured by the AR device. In this way, when a user holding the AR device captures a real scene picture corresponding to any attribute tag of the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
Referring to fig. 4, a flowchart illustrating a third example of a method for displaying a virtual object according to an embodiment of the present disclosure includes the following steps:
S401, acquiring a real scene picture acquired by AR equipment, identifying a character area in the real scene picture, and determining the identified character content as scene information.
For example, the text region in the real scene picture can be recognized by means of Optical Character Recognition (OCR). For example, if a real scene is a certain exhibition hall, then the word introductions for various exhibition items in the exhibition hall can be shown on the wall of the exhibition hall, and then the obtained word introductions can be used as scene information by recognizing the word introductions in the real scene picture.
S402, matching the recognized text content with the text content corresponding to at least one preset scene in the preset scene library, and determining that the scene information meets the condition of the target preset scene under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene.
For example, a corresponding relationship between the preset scenes and the text contents may be established in advance, specifically, each preset scene may correspond to at least one text content, or at least one text content may correspond to one preset scene.
The text content may be represented in the form of a keyword, a keyword sentence, or the like. One or more kinds of text contents can be configured in the target preset scene, and under the condition that the recognized text contents appearing in the current real scene are the same as any text contents in the target preset scene or have higher similarity, the scene information can be regarded as meeting the condition of the target preset scene.
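The text-based condition follows the same pattern. The sketch below uses pytesseract merely as one possible OCR backend and keyword containment as the match test; both choices, and the configured keywords, are assumptions of the illustration:

```python
from typing import Dict, Optional, Set

import pytesseract  # illustrative OCR engine; any OCR method could be substituted
from PIL import Image

# Hypothetical text content configured for each preset scene.
SCENE_TEXT_CONTENT: Dict[str, Set[str]] = {
    "change_myth": {"chang'e flying to the moon"},
    "tang_dynasty_hall": {"tang dynasty exhibition area"},
}


def match_scene_by_text(picture_path: str) -> Optional[str]:
    """OCR the captured real scene picture and match the recognized text against preset keywords."""
    recognized = pytesseract.image_to_string(Image.open(picture_path)).lower()
    for scene_id, keywords in SCENE_TEXT_CONTENT.items():
        if any(keyword in recognized for keyword in keywords):
            return scene_id
    return None
```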
S403, determining virtual object information matched with the target preset scene.
S404, based on the virtual object information, a virtual picture corresponding to the virtual object information is determined.
S405, displaying the AR effect of the combination of the real scene picture and the virtual picture in the AR equipment.
In this embodiment, the virtual picture of the virtual object related to the real scene can be triggered and displayed based on the text content in the real scene picture captured by the AR device. In this way, when a user holding the AR device captures a real scene picture corresponding to the text content of the target preset scene, an AR effect in which the virtual picture and the real scene picture are combined can be presented in the AR device, which not only enriches the presentation form of the exhibition item but also strengthens the interaction with the user, deepens the user's impression of the exhibition item, and further improves the user's visual experience.
For features in the first to third examples that also relate to the previous embodiments, reference may be made to the explanation of the corresponding features in those embodiments; the description is not repeated here.
The following is an illustration of a specific application scenario of the disclosed embodiments.
First, a preset scene library may be established in the cloud or locally. The preset scene library records the mapping relationship between the scene identifiers of preset scenes and virtual object information (for example, the Imperial Palace corresponds to Peking opera, and the Terracotta Warriors correspond to Qin Shi Huang). The scene information represented by a scene identifier may be any one or a combination of images, text, geographical position information, external characteristics of the venue, and the like.
Thereafter, the scene is scanned: a mobile portable device such as a mobile phone with a camera scans the real scene in which the AR effect is needed, and sends the video frame data captured by the camera to the cloud. After receiving the video frame data, the server matches it against the previously established preset scene library, and returns the scene identifier of the matched preset scene to the client once the matching succeeds.
Further, after the client receives the scene identifier of the preset scene, the client may download or locally read the virtual object information corresponding to the scene identifier from the cloud, and display the AR effect superimposed with the corresponding virtual picture by using the virtual object information.
For example, if the four-character phrase "Chang'e flying to the moon" is recognized, the AR effect of Chang'e flying to the moon is superimposed on the camera preview field of view of a mobile portable device such as a mobile phone.
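A compressed sketch of this client-side flow; the endpoint paths, response fields and helper function are hypothetical and serve only to tie the three steps together:

```python
import requests  # illustrative HTTP client for talking to the cloud service

SERVER = "https://example.com/ar"  # hypothetical cloud endpoint


def show_ar_effect(virtual_object_info: dict) -> None:
    """Placeholder for superimposing the corresponding virtual picture on the camera preview."""


def scan_and_show(frame_bytes: bytes) -> None:
    """Send a captured video frame to the cloud, then fetch and show the matched AR content."""
    # 1. Upload the captured frame; the server matches it against the preset scene library.
    match = requests.post(f"{SERVER}/match", files={"frame": frame_bytes}).json()
    scene_id = match.get("scene_id")
    if scene_id is None:
        return  # no preset scene matched, so nothing is overlaid
    # 2. Download (or read locally) the virtual object information bound to the scene identifier.
    virtual_object_info = requests.get(f"{SERVER}/virtual_objects/{scene_id}").json()
    # 3. Hand the information to the rendering layer to superimpose the AR effect.
    show_ar_effect(virtual_object_info)
```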
Real scenes (such as buildings, historical backgrounds and mythical backgrounds in a Tang dynasty exhibition hall or the Forbidden City) are recognized through image recognition, character recognition, device positioning and other means, and the AR special effects related to those real scenes are then determined and displayed. This breaks through the limitation of traditional two-dimensional and closed three-dimensional displays, achieves the effect of fusion and superposition with the real scene, and greatly improves the user experience.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same technical concept, a virtual object display device corresponding to the virtual object display method is further provided in the embodiment of the present disclosure, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the virtual object display method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 5, a schematic diagram of an apparatus for displaying a virtual object according to an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 51, a first determination module 52, a second determination module 53 and a presentation module 54.
An obtaining module 51, configured to obtain scene information of a current real scene where the augmented reality AR device is located;
a first determining module 52, configured to determine, in a case that it is determined that the scene information meets a condition of a target preset scene, virtual object information matching the target preset scene;
a second determining module 53, configured to determine, based on the virtual object information, a virtual picture corresponding to the virtual object information;
and a display module 54, configured to display, in the AR device, an AR effect of combining the real scene picture with the virtual picture.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module 51 is specifically configured to:
acquiring the current geographical position information of the AR equipment, and taking the geographical position information as the scene information;
the first determining module 52, when determining that the scene information satisfies the condition of the target preset scene, is specifically configured to:
matching the geographical position information with a geographical position range corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the geographic position information is determined to be in the geographic position range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module 51 is specifically configured to:
acquiring a real scene picture acquired by the AR equipment, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as the scene information;
the first determining module 52, when determining that the scene information satisfies the condition of the target preset scene, is specifically configured to:
matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the entity object attribute is determined to be consistent with any attribute tag of the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when acquiring scene information of a real scene where an AR device is currently located, the acquiring module 51 is specifically configured to:
acquiring a real scene picture acquired by the AR equipment;
recognizing a character area in the real scene picture, and determining the recognized character content as the scene information;
the first determining module 52, when determining that the scene information satisfies the condition of the target preset scene, is specifically configured to:
matching the recognized text content with text content respectively corresponding to at least one preset scene in a preset scene library;
and under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
In some embodiments of the present disclosure, when determining the virtual object information matched with the target preset scene, the first determining module 52 is specifically configured to:
and searching virtual object information corresponding to the scene identification of the target preset scene from a preset virtual object library, wherein various kinds of virtual object information and scene identifications corresponding to various kinds of virtual object information are recorded in the virtual object library.
In some embodiments of the present disclosure, the target preset scene and the matched virtual object information belong to the same specific topic.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or of the modules it includes, may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, and for brevity, details are not repeated here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 6, a schematic structural diagram of a computer device provided in an embodiment of the present disclosure includes: a processor 11 and a memory 12; the memory 12 stores machine-readable instructions executable by the processor 11, and when the computer device runs, these instructions are executed by the processor 11 to perform the following steps:
acquiring scene information of the real scene in which an augmented reality (AR) device is currently located; under the condition that the scene information is determined to meet the condition of a target preset scene, determining virtual object information matched with the target preset scene; determining, based on the virtual object information, a virtual picture corresponding to the virtual object information; and displaying, in the AR device, an AR effect in which the real scene picture and the virtual picture are combined.
The specific execution process of the instruction may refer to the steps of the method for displaying a virtual object described in the embodiments of the present disclosure, and details are not described here.
In addition, the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for displaying a virtual object described in the above method embodiments are performed.
The computer program product of the method for displaying a virtual object provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where instructions included in the program code may be used to execute the steps of the method for displaying a virtual object described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one kind of logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some communication interfaces, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that can be easily conceived by a person skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for displaying a virtual object, comprising:
acquiring scene information of a real scene in which an augmented reality (AR) device is currently located;
under the condition that the scene information is determined to meet the condition of a target preset scene, determining virtual object information matched with the target preset scene;
determining, based on the virtual object information, a virtual picture corresponding to the virtual object information;
and displaying, in the AR device, an AR effect in which the real scene picture is combined with the virtual picture.
2. The method according to claim 1, wherein the obtaining scene information of a real scene in which the AR device is currently located comprises:
acquiring current geographical position information of the AR device, and taking the geographical position information as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the geographical position information with a geographical position range corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the geographical position information is determined to be in the geographical position range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
3. The method according to claim 1, wherein the obtaining scene information of a real scene in which the AR device is currently located comprises:
acquiring a real scene picture captured by the AR device, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library respectively;
and under the condition that the entity object attribute is determined to be consistent with any attribute tag of the target preset scene, determining that the scene information meets the condition of the target preset scene.
4. The method according to claim 1, wherein the obtaining scene information of a real scene in which the AR device is currently located comprises:
acquiring a real scene picture captured by the AR device;
recognizing a text area in the real scene picture, and determining the recognized text content as the scene information;
the determining that the scene information meets the condition of a target preset scene includes:
matching the recognized text content with text content respectively corresponding to at least one preset scene in a preset scene library;
and under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
5. The method according to any one of claims 1 to 4, wherein the determining the virtual object information matching with the target preset scene comprises:
and searching virtual object information corresponding to the scene identification of the target preset scene from a preset virtual object library, wherein various kinds of virtual object information and scene identifications corresponding to various kinds of virtual object information are recorded in the virtual object library.
6. The method according to any one of claims 1 to 5, wherein the target preset scene and the matched virtual object information belong to the same specific subject.
7. An apparatus for displaying a virtual object, comprising:
an acquiring module, configured to acquire scene information of a real scene in which an AR device is currently located;
a first determining module, configured to determine, under the condition that the scene information is determined to meet the condition of a target preset scene, virtual object information matched with the target preset scene;
a second determining module, configured to determine, based on the virtual object information, a virtual picture corresponding to the virtual object information;
and a display module, configured to display, in the AR device, an AR effect in which the real scene picture is combined with the virtual picture.
8. The apparatus according to claim 7, wherein the acquiring module, when acquiring the scene information of the real scene in which the AR device is currently located, is specifically configured to: acquiring current geographical position information of the AR device, and taking the geographical position information as the scene information; the first determining module, when determining that the scene information satisfies the condition of a target preset scene, is specifically configured to: matching the geographical position information with a geographical position range corresponding to at least one preset scene in a preset scene library respectively; and under the condition that the geographical position information is determined to be in the geographical position range corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene;
or, the acquiring module, when acquiring scene information of the real scene in which the AR device is currently located, is specifically configured to: acquiring a real scene picture captured by the AR device, performing attribute identification on an image area where an entity object is located in the real scene picture, and determining the attribute of the identified entity object as the scene information; the first determining module, when determining that the scene information satisfies the condition of a target preset scene, is specifically configured to: matching the entity object attribute with an attribute tag corresponding to at least one preset scene in a preset scene library respectively; and under the condition that the entity object attribute is determined to be consistent with any attribute tag of the target preset scene, determining that the scene information meets the condition of the target preset scene;
or, the acquiring module, when acquiring scene information of the real scene in which the AR device is currently located, is specifically configured to: acquiring a real scene picture captured by the AR device; recognizing a text area in the real scene picture, and determining the recognized text content as the scene information; the first determining module, when determining that the scene information satisfies the condition of a target preset scene, is specifically configured to: matching the recognized text content with text content respectively corresponding to at least one preset scene in a preset scene library; and under the condition that the recognized text content is determined to be consistent with the text content corresponding to the target preset scene, determining that the scene information meets the condition of the target preset scene.
9. A computer device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the processor is configured to execute the machine-readable instructions stored in the memory, and the machine-readable instructions, when executed by the processor, cause the processor to perform the steps of the method for displaying a virtual object according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a computer device, performs the steps of the method for displaying a virtual object according to any one of claims 1 to 6.
CN202010508202.2A 2020-06-05 2020-06-05 Virtual object display method and device, computer equipment and storage medium Pending CN111638796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010508202.2A CN111638796A (en) 2020-06-05 2020-06-05 Virtual object display method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010508202.2A CN111638796A (en) 2020-06-05 2020-06-05 Virtual object display method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111638796A true CN111638796A (en) 2020-09-08

Family

ID=72328843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508202.2A Pending CN111638796A (en) 2020-06-05 2020-06-05 Virtual object display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111638796A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164518A (en) * 2013-03-06 2013-06-19 杭州九树网络科技有限公司 Mobile terminal (MT) augmented reality application system and method
CN107817897A (en) * 2017-10-30 2018-03-20 努比亚技术有限公司 A kind of information intelligent display methods and mobile terminal
CN110197532A (en) * 2019-06-05 2019-09-03 北京悉见科技有限公司 System, method, apparatus and the computer storage medium of augmented reality meeting-place arrangement
CN110286773A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Information providing method, device, equipment and storage medium based on augmented reality

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130946B (en) * 2020-09-22 2024-03-26 西安宇视信息科技有限公司 Airplane information display method and device, electronic equipment and storage medium
CN112130946A (en) * 2020-09-22 2020-12-25 西安宇视信息科技有限公司 Aircraft information display method and device, electronic equipment and storage medium
CN112150318A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Augmented reality information interaction method and device, electronic equipment and storage medium
CN112437090A (en) * 2020-11-27 2021-03-02 深圳市商汤科技有限公司 Resource loading method and device, electronic equipment and storage medium
CN112530219A (en) * 2020-12-14 2021-03-19 北京高途云集教育科技有限公司 Teaching information display method and device, computer equipment and storage medium
CN112684894A (en) * 2020-12-31 2021-04-20 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN112987921A (en) * 2021-02-19 2021-06-18 车智互联(北京)科技有限公司 VR scene explanation scheme generation method
CN112987921B (en) * 2021-02-19 2024-03-15 车智互联(北京)科技有限公司 VR scene explanation scheme generation method
WO2022227421A1 (en) * 2021-04-26 2022-11-03 深圳市慧鲤科技有限公司 Method, apparatus, and device for playing back sound, storage medium, computer program, and program product
CN113359983A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium
CN113359984A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Bottle special effect presenting method and device, computer equipment and storage medium
CN113345108A (en) * 2021-06-25 2021-09-03 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113345108B (en) * 2021-06-25 2023-10-20 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113362474A (en) * 2021-06-28 2021-09-07 北京市商汤科技开发有限公司 Augmented reality data display method and device, electronic equipment and storage medium
CN113470187A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 AR collection method, terminal, device and storage medium
CN113791750B (en) * 2021-09-24 2023-12-26 腾讯科技(深圳)有限公司 Virtual content display method, device and computer readable storage medium
CN113791750A (en) * 2021-09-24 2021-12-14 腾讯科技(深圳)有限公司 Virtual content display method and device and computer readable storage medium
CN113989470A (en) * 2021-11-15 2022-01-28 北京有竹居网络技术有限公司 Picture display method and device, storage medium and electronic equipment
CN114489337A (en) * 2022-01-24 2022-05-13 深圳市慧鲤科技有限公司 AR interaction method, device, equipment and storage medium
CN114567535A (en) * 2022-03-10 2022-05-31 北京鸿文汇智科技有限公司 Product interaction and fault diagnosis method based on augmented reality
CN114567535B (en) * 2022-03-10 2024-01-09 北京鸿文汇智科技有限公司 Product interaction and fault diagnosis method based on augmented reality
CN114579029A (en) * 2022-03-22 2022-06-03 阿波罗智联(北京)科技有限公司 Animation display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
KR20210047278A (en) AR scene image processing method, device, electronic device and storage medium
US9558559B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
JP5334911B2 (en) 3D map image generation program and 3D map image generation system
CN105517679B (en) Determination of the geographic location of a user
US10147399B1 (en) Adaptive fiducials for image match recognition and tracking
JP5652097B2 (en) Image processing apparatus, program, and image processing method
CN112684894A (en) Interaction method and device for augmented reality scene, electronic equipment and storage medium
CN112074797A (en) System and method for anchoring virtual objects to physical locations
EP2981945A1 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
KR20150075532A (en) Apparatus and Method of Providing AR
CN113359986B (en) Augmented reality data display method and device, electronic equipment and storage medium
CN111640193A (en) Word processing method, word processing device, computer equipment and storage medium
WO2022262521A1 (en) Data presentation method and apparatus, computer device, storage medium, computer program product, and computer program
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN103106236A (en) Information registration device, information registration method, information registration system, information presentation device, informaton presentation method and informaton presentaton system
CN109522503B (en) Tourist attraction virtual message board system based on AR and LBS technology
JP2017085533A (en) Information processing system and information processing method
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN112947756A (en) Content navigation method, device, system, computer equipment and storage medium
CN112788443B (en) Interaction method and system based on optical communication device
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
US20040125216A1 (en) Context based tagging used for location based services
CN111638792A (en) AR effect presentation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200908)