CN113973235A - Interactive information display method and device and computer equipment - Google Patents

Interactive information display method and device and computer equipment

Info

Publication number
CN113973235A
Authority
CN
China
Prior art keywords
scene
interactive
information
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010709915.5A
Other languages
Chinese (zh)
Inventor
唐自信
王鹏
陈馥玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202010709915.5A priority Critical patent/CN113973235A/en
Publication of CN113973235A publication Critical patent/CN113973235A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318 Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an interactive information display method, an interactive information display apparatus, a computer device and a readable storage medium, belonging to the technical field of video display. The interactive information display method comprises the following steps: when an AR mode start instruction triggered by a user is received, invoking a camera of a terminal device to acquire a first image, and determining the scene where the user is currently located according to the first image; acquiring first position information associated with the scene and uploading the first position information to a server, so that the server determines the interactive information associated with the scene according to the first position information and the position information associated with each interactive content to be displayed; and receiving the scene-associated interactive information returned by the server and displaying the interactive information in an AR manner on the screen of the terminal device, wherein the interactive information comprises at least one interactive content. The method and apparatus make it convenient for the user to learn the interactive information published by other users, improving the user experience.

Description

Interactive information display method and device and computer equipment
Technical Field
The present application relates to the field of video display technologies, and in particular, to a method and an apparatus for displaying interactive information, a computer device, and a computer-readable storage medium.
Background
With the development of internet technology, users often browse various kinds of information on the internet. For example, before going to a certain place to play or to a certain shop to buy something, a user often searches in advance for interactive information published by other users about that place or shop. However, when the user searches for information related to the place or shop, the information found is scattered, making it inconvenient for the user to learn the interactive information published by other users.
Disclosure of Invention
In view of the above, an interactive information display method, an interactive information display apparatus, a computer device and a computer-readable storage medium are provided to solve the problem in the prior art that it is inconvenient for a user to learn the interactive information published by other users.
The application provides an interactive information display method, which comprises the following steps:
when an AR mode start instruction triggered by a user is received, invoking a camera of a terminal device to acquire a first image, and determining the scene where the user is currently located according to the first image;
acquiring first position information associated with the scene, and uploading the first position information to a server, so that the server determines the interactive information associated with the scene according to the first position information and the position information associated with each interactive content to be displayed;
and receiving the scene-associated interactive information returned by the server, and displaying the interactive information in an AR manner on the screen of the terminal device, wherein the interactive information comprises at least one interactive content.
Optionally, the invoking a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image includes:
calling a camera of the terminal equipment to acquire a first image;
determining whether a second image matching the first image exists in a preset image library;
and if so, taking the scene associated with the second image as the scene where the user is currently located.
Optionally, the invoking a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image includes:
calling a camera of the terminal equipment to acquire a first image, wherein the first image carries second position information;
determining whether third position information matching the second position information exists among a plurality of pieces of preset third position information;
and if so, taking the scene associated with the matched third position information as the scene where the user is currently located.
Optionally, the receiving the scene-associated interactive information returned by the server and displaying the interactive information in an AR manner on the screen of the terminal device, wherein the interactive information comprises at least one interactive content, includes:
receiving the scene-associated interaction information returned by the server, and filtering the interaction information;
and displaying the filtered interactive information in a screen of the terminal equipment.
Optionally, the interactive information includes a plurality of first barrages, and displaying the plurality of first barrages on the screen of the terminal device includes:
acquiring attribute information of the first bullet screen;
determining a display form of the first barrage according to the attribute information;
and displaying the first barrage in a screen of the terminal equipment by adopting the display form.
Optionally, the interactive information includes a plurality of second barrages and a video with barrages, and after the plurality of second barrages and the video with barrages are displayed on the screen of the terminal device, the method further includes:
acquiring a plurality of third bullet screens in the video with the bullet screens;
and comparing the plurality of third bullet screens with the plurality of second bullet screens, and when the third bullet screens and the second bullet screens contain an identical bullet screen, deleting the identical bullet screen from the screen of the terminal device.
Optionally, the method further comprises:
and when a publish instruction triggered by the user is received, acquiring the interactive content published by the user in the scene, and uploading the published interactive content to the server, wherein the published interactive content carries position information.
The application also provides an interactive information display device, including:
the terminal equipment comprises a receiving module, a processing module and a display module, wherein the receiving module is used for calling a camera of the terminal equipment to collect a first image when an AR mode starting instruction triggered by a user is received, and determining a scene where the user is located currently according to the first image;
the acquisition module is used for acquiring first position information associated with the scene and uploading the first position information to the server so that the server determines interactive information associated with the scene according to the first position information and position information associated with each interactive content to be displayed;
and the display module is used for receiving the scene-related interactive information returned by the server and displaying the interactive information in an AR (augmented reality) manner in a screen of the terminal equipment, wherein the interactive information comprises at least one interactive content.
The present application further provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
The beneficial effects of the above technical solution are as follows:
In the above technical solution, when an AR mode start instruction triggered by a user is received, a camera of the terminal device is invoked to acquire a first image, and the scene where the user is currently located is determined according to the first image; first position information associated with the scene is acquired and uploaded to the server, so that the server determines the interactive information associated with the scene according to the first position information and the position information associated with each interactive content to be displayed; and the scene-associated interactive information returned by the server is received. The present application displays, through Augmented Reality (AR) technology, the interactive information associated with the scene where the user is currently located, and this interactive information consists of comments or videos associated with the scene that have been published by a plurality of other users; the method of the embodiments of the present application therefore makes it convenient for the user to learn the interactive information published by other users, improving the user experience.
Drawings
FIG. 1 is a block diagram of one embodiment of a system block diagram for interactive information display according to the present application;
FIG. 2 is a flowchart of an embodiment of an interactive information display method according to the present application;
FIG. 3 is a detailed flowchart of the steps of invoking a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image, in one embodiment of the present application;
FIG. 4 is a detailed flowchart of the same steps in another embodiment of the present application;
FIG. 5 is a schematic view of an interactive information display of the present application;
fig. 6 is a detailed flowchart of the step of receiving the scene-related interaction information returned by the server and displaying the interaction information in the screen of the terminal device according to the present application;
fig. 7 is a detailed flowchart of the step of displaying the plurality of first barrages in the screen of the terminal device according to the present application;
FIG. 8 is a flowchart of another embodiment of an interactive information display method according to the present application;
FIG. 9 is a block diagram of an embodiment of an interactive information display device according to the present application;
fig. 10 is a schematic hardware structure diagram of a computer device for executing the interactive information display method according to an embodiment of the present application.
Detailed Description
The advantages of the present application are further illustrated below with reference to the accompanying drawings and specific embodiments.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
In the description of the present application, it should be understood that the numerals before the steps do not indicate the order in which the steps are performed, but merely serve to facilitate the description of the present application and to distinguish the steps, and therefore should not be construed as limiting the present application.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment of an interactive information display method according to an embodiment of the present application. In an exemplary embodiment, the terminal device 2 may obtain data from the server 4 via the network 6. The terminal device 2 may be an electronic device having a data transmission function, such as a mobile phone or a tablet personal computer. The network 6 may be the internet.
Fig. 2 schematically shows a flowchart of an interactive information display method according to a first embodiment of the present application. The method is applied to the terminal device, and it can be understood that the flowchart in the method embodiment is not used to limit the order of executing the steps. The following description is made by taking a terminal device as an execution subject.
As shown in fig. 2, the interactive information display method includes steps S20 to S22, in which:
step S20, when an AR mode starting instruction triggered by a user is received, calling a camera of the terminal equipment to acquire a first image, and determining the current scene of the user according to the first image;
specifically, the user may trigger an AR (Augmented Reality) mode opening instruction by clicking an entity button on the terminal device, the user may trigger the AR mode opening instruction by clicking an AR mode opening control set on the terminal device, or the user may trigger the AR mode opening instruction by performing a sliding operation on a screen of the terminal device, for example, the user triggers the AR mode opening instruction by a pull-down operation.
In this embodiment, if the user triggers the AR mode start instruction for the first time, then before the camera of the terminal device is invoked to acquire the first image, the user must first authorize the terminal device to operate the camera; only after an instruction approving the authorization is received from the user can the camera be invoked to acquire the first image. It can be understood that, when the user has previously granted the terminal device the right to operate the camera, the user does not need to grant it again when subsequently triggering the AR mode start instruction.
After the terminal device acquires the first image, the currently acquired first image can be identified to determine the current scene where the user is located. The scene refers to a current specific location of the user, for example, whether the user is in a certain shop, a certain store, or a certain scenic spot.
The camera is preferably a rear camera of the terminal device, and the terminal device is an electronic device with a camera, such as a mobile phone or an iPad.
In an embodiment, when the position of the user changes, the scene where the user is currently located is re-determined at preset intervals according to a newly acquired first image.
In an exemplary implementation manner, referring to fig. 3, the invoking a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image includes:
and step S30, calling a camera of the terminal equipment to acquire a first image.
Step S31, determining whether a second image matching the first image exists in a preset image library.
Specifically, an image or a 3D scene graph corresponding to each scene is stored in the image library in advance; in this embodiment, each scene may correspond to one image or to multiple images. The image library may be stored in the local storage of the terminal device or in the server; when it is stored in the server, it first needs to be downloaded from the server to local storage. Alternatively, when the image library is stored in the server, it need not be downloaded: instead, the first image is uploaded to the server for identification when the camera captures it, the server determines whether a second image matching the first image exists in the preset image library, and the server then sends the determination result to the terminal device, so that the terminal device knows from that result whether a matching second image exists.
In an embodiment, when the preset image library contains an image whose similarity with the first image is greater than a preset threshold, it may be considered that a second image matching the first image exists in the image library. It should be noted that, when several images in the image library have similarity values greater than the preset threshold, the image with the largest similarity value may be used as the second image.
And step S32, if yes, taking the scene associated with the second image as the scene where the user is currently located.
Specifically, since the images stored in the image library are associated with the scene in advance, when it is determined that a second image matching the first image exists in the preset image library, the scene associated with the second image may be used as the scene where the user is currently located.
In the embodiment, the image library corresponding to the scene is established in advance, so that when the first image is acquired, the current scene of the user can be accurately identified according to the image library.
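For illustration only (this is not part of the patent disclosure), the matching of steps S31-S32 might look like the following Python sketch; the feature-vector representation, the cosine-similarity measure, the 0.8 threshold and the `ImageEntry` structure are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ImageEntry:
    features: List[float]  # precomputed feature vector of a library image
    scene_id: str          # scene associated with this image in advance

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def identify_scene(first_image_features: List[float],
                   image_library: List[ImageEntry],
                   threshold: float = 0.8) -> Optional[str]:
    """Steps S31-S32: find the library image most similar to the first
    image; if its similarity exceeds the preset threshold, it is the
    'second image' and its associated scene is the current scene."""
    if not image_library:
        return None
    best = max(image_library,
               key=lambda e: cosine_similarity(first_image_features, e.features))
    if cosine_similarity(first_image_features, best.features) > threshold:
        return best.scene_id
    return None  # no matching second image in the preset image library
```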
In another exemplary embodiment, referring to fig. 4, the invoking a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image includes:
and step S40, calling a camera of the terminal equipment to acquire a first image, wherein the first image carries second position information.
Specifically, when a camera of the terminal device is called to collect a first image, second position information is obtained through a positioning module in the terminal device, so that the collected first image can carry the second position information. In this embodiment, the second location information is longitude and latitude information acquired by the positioning module.
Step S41, it is determined whether there is third location information matching the second location information in a plurality of preset third location information.
And step S42, if yes, taking the scene associated with the matched third position information as the scene where the user is currently located.
Specifically, third location information corresponding to each scene is stored in advance. When the second location information is acquired, it may be matched against each piece of preset third location information; when the second location information matches one of them, the scene associated with the matched third location information may be used as the scene where the user is currently located.
For example, when the second location information is within a preset location range of the third location information, the two may be considered to match. The preset range may be set according to the actual situation; for example, with a preset range of 10 meters, the second location information may be determined to match the third location information when it is within 10 meters of the third location information.
It should be noted that the third location information is also latitude and longitude information.
In this embodiment, since the third location information is associated with the scene in advance, when it is determined that there is third location information that matches the second location information, the scene associated with the third location information may be used as the current scene where the user is located.
In the embodiment, the incidence relation between the scene and the third position information is pre-established, so that when the third position information matched with the second position information is obtained, the current scene of the user can be accurately identified according to the third position information.
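As an illustrative sketch only, the location matching of steps S41-S42 could be implemented as below, assuming latitude/longitude pairs and the 10-meter preset range mentioned above; the haversine formula and the sample scene data are choices made for the example, not taken from the patent.

```python
import math

# (latitude, longitude) associated with each scene in advance (sample data)
SCENE_LOCATIONS = {
    "coffee_shop_a": (31.2304, 121.4737),
    "scenic_spot_b": (31.2310, 121.4750),
}

def distance_m(p1, p2):
    """Great-circle (haversine) distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def scene_for_location(second_location, preset_range_m=10.0):
    """Steps S41-S42: return the scene whose third location information
    lies within the preset range of the second location information."""
    for scene_id, third_location in SCENE_LOCATIONS.items():
        if distance_m(second_location, third_location) <= preset_range_m:
            return scene_id
    return None
```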
Step S21, acquiring first location information associated with the scene, and uploading the first location information to the server, so that the server determines the interactive information associated with the scene according to the first location information and the location information associated with each interactive content to be displayed.
Specifically, each scene has a first location information associated therewith, and in this embodiment, the first location information associated with the scene is preferably latitude and longitude information of a specific address corresponding to the scene.
The server stores interactive contents published by various users, and each interactive content may carry its associated location information. In this way, after the terminal device uploads the first location information, the server may determine the interactive information associated with the scene according to the first location information and the location information associated with each interactive content to be displayed that is pre-stored in the server. After determining the interactive information associated with the scene, the server returns it to the terminal device.
For example, when determining the interactive information associated with the scene, the server may match the first location information against the location information associated with each interactive content to be displayed in turn. If the location information associated with the interactive content currently being examined is within a preset range of the first location information, that interactive content may be determined to be part of the interactive information associated with the scene; otherwise it may be determined not to be. The preset range can be set according to the actual situation; for example, with a preset range of 10 meters, an interactive content belongs to the scene-associated interactive information when its associated location information is within 10 meters of the first location information. After all the interactive contents to be displayed have been matched, all the matched ones together constitute the interactive information associated with the scene.
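On the server side, this selection amounts to a radius filter over the stored contents. The sketch below is illustrative only; the `InteractiveContent` record is an assumed structure, and the distance helper repeats the haversine calculation from the earlier sketch.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InteractiveContent:
    content_id: str
    kind: str                      # e.g. "video", "comment", "article", "column"
    location: Tuple[float, float]  # (lat, lon) carried by the published content

def _distance_m(p1, p2):
    # haversine distance in meters, as in the earlier sketch
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def scene_interactions(first_location: Tuple[float, float],
                       contents: List[InteractiveContent],
                       preset_range_m: float = 10.0) -> List[InteractiveContent]:
    """Match the first location information against the location of each
    interactive content to be displayed; those within the preset range
    together form the interactive information associated with the scene."""
    return [c for c in contents
            if _distance_m(first_location, c.location) <= preset_range_m]
```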
The interactive information may be videos, comments, articles, columns and the like that a plurality of users have published in each scene and that relate to the current scene, where each video or comment published by a user is one interactive content. In a specific scene, the interactive information may be videos, comments, articles or columns published in the current scene by UP owners and other users; it may also be videos, comments, articles or columns carrying local check-in information; it may also be videos, comments, articles or columns whose title and content fields relate to the current scene.
In one embodiment, the interactive information may also be fixed content that is pre-established by the user, for example, a promotional video that is established by the user for a certain shop.
Step S22, receiving the scene-related interaction information returned by the server, and displaying the interaction information in an AR manner on a screen of the terminal device, where the interaction information includes at least one interaction content.
Specifically, since the terminal device is in the AR mode, when displaying the interactive information associated with the scene, the interactive information is displayed in the AR mode, and in a specific scene, the displayed interactive information is as shown in fig. 5.
In an embodiment, when the interactive content displayed in the screen includes a video, the user may click the video to jump to a playing page to view the complete content. When the interactive content displayed in the screen contains comments, the user can click the comments to view the comments or the related dynamic states.
In this embodiment, when an AR mode start instruction triggered by the user is received, the camera of the terminal device is invoked to acquire a first image, and the scene where the user is currently located is determined according to the first image; the interactive information associated with the scene is then acquired from the server and displayed on the screen of the terminal device, the interactive information comprising at least one interactive content. The present application displays, through Augmented Reality (AR) technology, the interactive information associated with the user's current scene, and this information consists of comments or videos associated with the scene published by a plurality of other users; the method of the embodiments of the present application therefore makes it convenient for the user to learn the interactive information published by other users, improving the user experience.
In an exemplary embodiment, referring to fig. 6, the receiving the scene-associated interactive information returned by the server and displaying the interactive information in an AR manner on the screen of the terminal device, wherein the interactive information comprises at least one interactive content, includes:
step S60, receiving the scene-related interaction information returned by the server, and filtering the interaction information;
and step S61, displaying the interaction information after the filtering processing in the screen of the terminal equipment.
Specifically, in this embodiment of the application, when a plurality of interactive contents are received, because the screen can display only a limited number of them, the acquired interactive contents may be filtered before being displayed. In one embodiment, all the interactive contents may be filtered by recency, popularity or relevance, and the filtered interactive contents are then displayed.
In this embodiment, the interactive content is filtered, so that too much interactive content can be prevented from being displayed on the screen, and the user experience can be improved.
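One way the filtering of step S60 might be realized is a simple sort-and-truncate, as sketched below; the field names (`publish_time`, `like_count`, `relevance`) and the cap of 20 items are assumptions for the example.

```python
def filter_interactions(contents, mode="latest", max_items=20):
    """Step S60: keep at most max_items interactive contents, ordered by
    the chosen criterion, so the screen is not overcrowded."""
    keys = {
        "latest": lambda c: c["publish_time"],  # most recent first
        "hottest": lambda c: c["like_count"],   # most popular first
        "relevant": lambda c: c["relevance"],   # most scene-relevant first
    }
    return sorted(contents, key=keys[mode], reverse=True)[:max_items]
```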
In an exemplary embodiment, referring to fig. 7, the interactive information may include a plurality of first barrages, and displaying the plurality of first barrages on the screen of the terminal device includes:
and step S70, acquiring the attribute information of the first bullet screen.
Specifically, the attribute information includes the length of the first barrage, the relevance of the first barrage to the scene, the number of duplicates of the first barrage, and the like.
Note that the bullet screen in this embodiment refers to comment information issued by the user.
And step S71, determining the display form of the first bullet screen according to the attribute information.
Specifically, different attribute information corresponds to different presentation forms; for example, a first barrage whose length is five characters is presented in a large yellow font, and a first barrage relevant to the current scene is presented with an added background color.
In this embodiment, by presetting the presentation forms corresponding to various attribute information, when the first barrage is displayed, the corresponding presentation forms can be matched according to the attribute information of the first barrage.
In an exemplary scenario, one first barrage, e.g. "High energy ahead", is displayed in a large yellow font, and another first barrage, e.g. "Build a civilized Bilibili community together", is displayed with a background color; the specific display can be seen in fig. 5.
In another embodiment, the presentation form of the first barrage may also be determined according to both the attribute information of the first barrage and the picture attribute information of the screen, where the picture attribute information includes screen size, screen orientation, picture characteristics (such as color) and the like. Different screen sizes and orientations lead to different presentation forms: for example, if the picture color is dark, the first barrage is displayed in a light color; and when the screen size is larger than 5 inches, the first barrage may be displayed in a large font size.
And step S72, displaying the first barrage in the screen of the terminal equipment by adopting the display form.
Specifically, after the presentation form of the first barrage is obtained, when the first barrage is displayed, the first barrage can be displayed in the screen of the terminal device by using the presentation form.
In this embodiment, first barrages with different attributes are displayed in different presentation forms, which improves the diversity of barrage display.
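The attribute-to-presentation mapping of steps S70-S71 can be pictured as a small rule table, sketched below; the concrete rules (five-character threshold, yellow large font, background color, light color on dark pictures) follow the examples given above, and the `DisplayForm` fields are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayForm:
    font_size: str = "normal"
    color: str = "white"
    background: Optional[str] = None

def display_form_for(barrage_text: str, relevant_to_scene: bool,
                     dark_picture: bool = False) -> DisplayForm:
    """Step S71: derive the presentation form of a first barrage from its
    attribute information (and, optionally, picture attributes of the screen)."""
    form = DisplayForm()
    if len(barrage_text) == 5:       # e.g. the five-character barrage example
        form.font_size, form.color = "large", "yellow"
    if relevant_to_scene:            # scene-relevant barrages get a background color
        form.background = "highlight"
    if dark_picture:                 # on a dark picture, use a light text color
        form.color = "lightgray"
    return form
```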
In an exemplary embodiment, referring to fig. 8, the interactive information may include a plurality of second barrages and a video with barrages, and after the plurality of second barrages and the video with barrages are displayed on the screen of the terminal device, the method further includes:
and step S80, acquiring a plurality of third bullet screens in the video with the bullet screens.
Specifically, the video with bullet screens is a video on which, after it was published by one user, a plurality of users have published bullet screens.
In the embodiment of the invention, after the terminal device acquires the video with bullet screens, the plurality of third bullet screens in it can be obtained by parsing the video.
Step S81, comparing the plurality of third bullet screens with the plurality of second bullet screens, and when they contain an identical bullet screen, deleting the identical bullet screen from the screen of the terminal device.
Specifically, after the third bullet screens and the second bullet screens are obtained, in order to avoid displaying duplicate bullet screens on the screen, the third bullet screens may be compared with the second bullet screens to determine whether an identical bullet screen exists in both; if so, it may be deleted from the screen of the terminal device.
In this embodiment, the same bullet screen is deleted from the screen of the terminal device, so that more bullet screens can be displayed in the terminal device.
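A sketch of the duplicate-removal step S81, assuming barrages (bullet screens) compare equal by their text; which copy is removed (the one inside the video or the standalone one) is a design choice the patent leaves open, and the sketch removes the standalone second barrage.

```python
from typing import List

def dedupe_barrages(second_barrages: List[str],
                    third_barrages: List[str]) -> List[str]:
    """Step S81: drop any second barrage whose text also appears among the
    third barrages parsed out of the video, so the same barrage is not
    displayed twice on the screen."""
    seen_in_video = set(third_barrages)
    return [b for b in second_barrages if b not in seen_in_video]
```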
In an exemplary embodiment, the interactive information display method further includes:
and when a publish instruction triggered by the user is received, acquiring the interactive content published by the user in the scene, and uploading the published interactive content to the server, wherein the published interactive content carries location information.
Specifically, while viewing the interactive information in the current scene, the user may also publish interactive content in the scene. When the user needs to publish interactive content, the user may input it in a pre-provided interactive content input window and, after finishing the input, trigger a publish instruction by clicking a preset publish control, for example a publish button on the screen.
After receiving the publish instruction triggered by the user, the terminal device immediately acquires the interactive content published by the user in the scene and uploads it to the server, so that it can be displayed to other users for viewing.
It can be understood that, in this embodiment, in order for the server to be able to provide the interactive content to other users for viewing, the interactive content carries the user's location information when the user publishes it.
In this embodiment, by providing the interactive content publishing function, the interactive content published by the user can be shared with other users for viewing.
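For illustration only, the upload performed in this publishing step might carry a payload like the following; the field names and the JSON encoding are assumptions, not the patent's actual protocol.

```python
import json
from typing import Tuple

def build_publish_payload(user_id: str, text: str,
                          location: Tuple[float, float]) -> str:
    """Package the interactive content published in the scene together with
    the location information that it must carry."""
    return json.dumps({
        "user_id": user_id,
        "content": text,
        "location": {"lat": location[0], "lon": location[1]},
    })

# Example: what the terminal device would upload to the server
payload = build_publish_payload("user_42", "Great view from here!",
                                (31.2304, 121.4737))
```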
To facilitate an understanding of the present application, its use will be described in detail below with reference to a specific embodiment.
Referring to fig. 5, a user opens a client APP or web client installed on a terminal device such as a mobile phone or tablet computer and then starts the AR mode. The terminal device invokes the rear camera to capture an image of the scene where the user is currently located, performs image recognition on the captured image to identify that scene, and, after recognizing the scene, acquires from the server the interactive information associated with it; the acquired interactive information is then displayed on the screen of the terminal device in an AR manner so that the user can view it. When the position of the user changes, the terminal device keeps determining the scene where the user is located from images captured at the user's current position; once it determines that the scene has changed, it acquires from the server the interactive information associated with the new scene and then displays the updated interactive information.
Fig. 9 is a block diagram schematically illustrating an interactive information display device according to an embodiment of the present application. The device may be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the embodiment of the present application. A program module in the embodiments of the present application refers to a series of computer program instruction segments capable of performing specific functions, and is better suited than the program itself for describing the execution process of the interactive information display device in the storage medium.
As shown in fig. 9, the interactive information display device 900 may include a receiving module 901, an obtaining module 902, and a displaying module 903, wherein:
the receiving module 901 is configured to, when receiving an AR mode starting instruction triggered by a user, invoke a camera of a terminal device to acquire a first image, and determine a scene where the user is currently located according to the first image;
specifically, the user may trigger the AR mode opening instruction by clicking an entity button on the terminal device, the user may also trigger the AR mode opening instruction by clicking an AR mode opening control set on the terminal device, or the user may trigger the AR mode opening instruction by performing a sliding operation on a screen of the terminal device, for example, the user triggers the AR mode opening instruction by a pull-down operation.
In this embodiment, if the user triggers the AR mode start instruction for the first time, then before the camera of the terminal device is invoked to acquire the first image, the user must first authorize the terminal device to operate the camera; only after an instruction approving the authorization is received from the user can the camera be invoked to acquire the first image. It can be understood that, when the user has previously granted the terminal device the right to operate the camera, the user does not need to grant it again when subsequently triggering the AR mode start instruction.
After the terminal device acquires the first image, the currently acquired first image can be identified to determine the current scene where the user is located. The scene refers to a current specific location of the user, for example, whether the user is in a certain shop, a certain store, or a certain scenic spot.
The camera is preferably a rear camera of the terminal device.
In an exemplary embodiment, the receiving module 901 is further configured to invoke a camera of the terminal device to capture a first image.
The receiving module 901 is further configured to determine whether a second image matching the first image exists in a preset image library.
Specifically, the image library stores the images corresponding to each scene in advance; in this embodiment, each scene may correspond to one image or to multiple images. The image library may be stored in the local storage of the terminal device or in the server; when it is stored in the server, it first needs to be downloaded from the server to local storage. Alternatively, when the image library is stored in the server, it need not be downloaded: instead, the first image is uploaded to the server for identification when the camera captures it, the server determines whether a second image matching the first image exists in the preset image library, and the server then sends the determination result to the terminal device, so that the terminal device knows from that result whether a matching second image exists.
In an embodiment, when the preset image library contains an image whose similarity with the first image is greater than a preset threshold, it may be considered that a second image matching the first image exists in the image library. It should be noted that, when several images in the image library have similarity values greater than the preset threshold, the image with the largest similarity value may be used as the second image.
The receiving module 901 is further configured to, if a second image matching the first image exists, use a scene associated with the second image as a scene where the user is currently located.
Specifically, since the images stored in the image library are associated with the scene in advance, when it is determined that a second image matching the first image exists in the preset image library, the scene associated with the second image may be used as the scene where the user is currently located.
In the embodiment, the image library corresponding to the scene is established in advance, so that when the first image is acquired, the current scene of the user can be accurately identified according to the image library.
In another exemplary embodiment, the receiving module 901 is further configured to invoke a camera of the terminal device to acquire a first image, where the first image carries the second location information.
Specifically, when a camera of the terminal device is called to collect a first image, second position information is obtained through a positioning module in the terminal device, so that the collected first image can carry the second position information. In this embodiment, the second location information is longitude and latitude information acquired by the positioning module.
The receiving module 901 is further configured to determine whether third location information matching the second location information exists in a plurality of preset third location information.
The receiving module 901 is further configured to, if there is third location information that matches the second location information, use a scene associated with the matched third location information as a scene where the user is currently located.
Specifically, third location information corresponding to each scene is stored in advance. When the second location information is acquired, it may be matched against each piece of preset third location information; when the second location information matches one of them, the scene associated with the matched third location information may be used as the scene where the user is currently located.
For example, when the second location information is within a preset location range of the third location information, the two may be considered to match. The preset range may be set according to the actual situation; for example, with a preset range of 10 meters, the second location information may be determined to match the third location information when it is within 10 meters of the third location information.
It should be noted that the third location information is also latitude and longitude information.
In this embodiment, since the third location information is associated with the scene in advance, when it is determined that there is third location information that matches the second location information, the scene associated with the third location information may be used as the current scene where the user is located.
In the embodiment, the incidence relation between the scene and the third position information is pre-established, so that when the third position information matched with the second position information is obtained, the current scene of the user can be accurately identified according to the third position information.
An obtaining module 902, configured to obtain first location information associated with the scene, and upload the first location information to the server, so that the server determines, according to the first location information and location information associated with each interactive content to be displayed, interactive information associated with the scene.
Specifically, each scene has a first location information associated therewith, and in this embodiment, the first location information associated with the scene is preferably latitude and longitude information of a specific address corresponding to the scene.
The server stores interactive contents published by various users, and each interactive content may carry its associated location information. In this way, after the terminal device uploads the first location information, the server may determine the interactive information associated with the scene according to the first location information and the location information associated with each interactive content to be displayed that is pre-stored in the server. After determining the interactive information associated with the scene, the server returns it to the terminal device.
For example, when determining the interactive information associated with the scene, the server may match the first location information against the location information associated with each interactive content to be displayed in turn. If the location information associated with the interactive content currently being examined is within a preset range of the first location information, that interactive content may be determined to be part of the interactive information associated with the scene; otherwise it may be determined not to be. The preset range can be set according to the actual situation; for example, with a preset range of 10 meters, an interactive content belongs to the scene-associated interactive information when its associated location information is within 10 meters of the first location information. After all the interactive contents to be displayed have been matched, all the matched ones together constitute the interactive information associated with the scene.
The interactive information may be videos, comments, articles, columns and the like that a plurality of users have published in each scene and that relate to the current scene, where each video or comment published by a user is one interactive content. In a specific scene, the interactive information may be videos, comments, articles or columns published in the current scene by UP owners and other users; it may also be videos, comments, articles or columns carrying local check-in information; it may also be videos, comments, articles or columns whose title and content fields relate to the current scene.
In one embodiment, the interactive information may also be fixed content that is pre-established by the user, for example, a promotional video that is established by the user for a certain shop.
A display module 903, configured to receive the scene-related interaction information returned by the server, and display the interaction information in an AR manner in a screen of the terminal device, where the interaction information includes at least one interaction content.
Specifically, since the terminal device is in the AR mode, when displaying the interactive information associated with the scene, the interactive information is displayed in the AR mode, and in a specific scene, the displayed interactive information is as shown in fig. 5.
In an embodiment, when the interactive content displayed in the screen includes a video, the user may click the video to jump to a playing page to view the complete content. When the interactive content displayed in the screen contains comments, the user can click the comments to view the comments or the related dynamic states.
In this embodiment, when an AR mode start instruction triggered by the user is received, the camera of the terminal device is invoked to acquire a first image, and the scene where the user is currently located is determined according to the first image; the interactive information associated with the scene is then acquired from the server and displayed on the screen of the terminal device, the interactive information comprising at least one interactive content. The present application displays, through Augmented Reality (AR) technology, the interactive information associated with the user's current scene, and this information consists of comments or videos associated with the scene published by a plurality of other users; the method of the embodiments of the present application therefore makes it convenient for the user to learn the interactive information published by other users, improving the user experience.
In an exemplary embodiment, the display module 903 is further configured to receive the interactive information associated with the scene returned by the server and to filter the interactive information;
the display module 903 is further configured to display the filtered interactive information on the screen of the terminal device.
Specifically, in this embodiment of the application, when a plurality of interactive contents are received, the acquired interactive contents may be filtered before being displayed, because the space for displaying them on the screen is limited. In one embodiment, all interactive contents may be filtered by recency ("latest"), popularity ("hottest"), or relevance ("most relevant"), and the filtered interactive contents are then displayed.
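As a rough illustration of this pre-display filtering (the patent names the criteria but not the mechanism), the sketch below sorts the received contents by an assumed field per criterion and keeps a fixed number; the dict keys and the limit of 20 are assumptions, not values from the patent.

```python
# Hedged sketch of filtering by "latest", "hottest" or "most relevant";
# the keys (publish_time, like_count, relevance) and the limit are assumed.
def filter_contents(contents: list, mode: str = "latest", limit: int = 20) -> list:
    sort_keys = {
        "latest": lambda c: c["publish_time"],
        "hottest": lambda c: c["like_count"],
        "most_relevant": lambda c: c["relevance"],
    }
    key = sort_keys.get(mode)
    ranked = sorted(contents, key=key, reverse=True) if key else list(contents)
    return ranked[:limit]  # only a limited number of contents fits on the screen
```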
In this embodiment, filtering the interactive content prevents too much interactive content from being displayed on the screen, which improves the user experience.
In an exemplary embodiment, the interactive information may include a plurality of first bullet screens, and the display module 903 is further configured to acquire attribute information of the first bullet screens.
Specifically, the attribute information includes the length of the first bullet screen, the relevance of the first bullet screen to the scene, the number of repetitions of the first bullet screen, and the like.
Note that a bullet screen in this embodiment refers to comment information published by a user.
The display module 903 is further configured to determine a presentation form of the first bullet screen according to the attribute information.
Specifically, different attribute information corresponds to different presentation forms; for example, a first bullet screen with a length of 5 characters is presented in a large yellow font, and a first bullet screen related to the current scene is presented with an added background color.
In this embodiment, presentation forms corresponding to various kinds of attribute information are preset, so that when a first bullet screen is displayed, the corresponding presentation form can be matched according to its attribute information.
In an exemplary scene, one first bullet screen is displayed in a large yellow font, while another first bullet screen, "co-create a civilized B-site community", is displayed with a background color; the specific display can be as shown in fig. 5.
In another embodiment, the presentation form of the first bullet screen may also be determined according to both its attribute information and the picture attribute information of the screen, where the picture attribute information includes the screen size, the screen orientation, picture characteristics (such as color), and the like. Different screen sizes and orientations call for different presentation forms: for example, if the picture color is dark, the first bullet screen is displayed in a light color; and if the screen size exceeds 5 inches, the first bullet screen may be displayed in a larger font.
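Purely as an illustration of the attribute-to-form mapping described above, the sketch below derives a style from bullet-screen attributes and screen picture attributes; every threshold, dict key, and style value in it is an assumption made for the example.

```python
# Hedged sketch: derive a presentation form from bullet-screen attributes and
# screen picture attributes; all thresholds and style values here are assumed.
def presentation_form(bullet: dict, screen: dict) -> dict:
    style = {"font_size": "normal", "color": "white", "background": None}
    if bullet.get("length", 0) <= 5:              # short bullet screen: large yellow font
        style.update(font_size="large", color="yellow")
    if bullet.get("scene_relevance", 0.0) > 0.8:  # related to the scene: add a background
        style["background"] = "semi_transparent"
    if screen.get("dominant_color") == "dark":    # dark picture: use a light font color
        style["color"] = "light"
    if screen.get("size_inches", 0) > 5:          # larger screens: larger font size
        style["font_size"] = "large"
    return style
```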
The display module 903 is further configured to display the first bullet screen on the screen of the terminal device in the determined presentation form.
Specifically, after the presentation form of the first bullet screen is determined, the first bullet screen is displayed on the screen of the terminal device in that presentation form.
In this embodiment, first bullet screens with different attributes are displayed in different presentation forms, which improves the diversity of bullet-screen display.
In an exemplary embodiment, the interactive information may include a plurality of second bullet screens and a video with bullet screens, and the interactive information display device 900 further includes an acquisition module and a comparison module.
The acquisition module is configured to acquire a plurality of third bullet screens from the video with bullet screens.
Specifically, the video with bullet screens is a video published by one user on which a plurality of users have subsequently posted bullet screens.
In this embodiment, after the terminal device acquires the video with bullet screens, it can obtain the plurality of third bullet screens by parsing the video.
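The patent does not specify how the bullet screens are carried with the video; purely for illustration, the sketch below assumes a simple XML side file with one <d> element per bullet screen (a common danmaku layout), which is an assumption and not a format defined by the patent.

```python
# Hedged sketch of the parsing step. The XML layout (<d ...>text</d>) is an
# assumption about the danmaku format, not something specified by the patent.
import xml.etree.ElementTree as ET

def extract_third_bullets(danmaku_xml: str) -> list:
    """Parse the bullet screens carried with a video into a list of dicts."""
    root = ET.fromstring(danmaku_xml)
    bullets = []
    for d in root.iter("d"):
        bullets.append({
            "text": (d.text or "").strip(),
            "params": d.get("p", ""),  # timing/style attributes, format assumed
        })
    return bullets
```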
The comparison module is configured to compare the plurality of third bullet screens with the plurality of second bullet screens and, when the third bullet screens and the second bullet screens contain the same bullet screen, delete the same bullet screen from the screen of the terminal device.
Specifically, after the third bullet screens and the second bullet screens are obtained, in order to avoid displaying repeated bullet screens on the screen, the third bullet screens may be compared with the second bullet screens to determine whether the same bullet screen exists in both; if so, the duplicate bullet screen can be deleted from the screen of the terminal device.
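A minimal sketch of this comparison step follows; comparing bullet screens by their text alone is an assumption made for illustration. Combined with extract_third_bullets from the previous sketch, the overlay would then keep only dedupe_second_bullets(second_bullets, extract_third_bullets(xml)).

```python
# Hedged sketch: remove from the scene overlay (second bullet screens) any
# bullet screen that already appears in the video (third bullet screens).
def dedupe_second_bullets(second_bullets: list, third_bullets: list) -> list:
    texts_in_video = {b["text"] for b in third_bullets}
    return [b for b in second_bullets if b["text"] not in texts_in_video]
```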
In this embodiment, deleting duplicate bullet screens from the screen of the terminal device frees space so that more distinct bullet screens can be displayed.
In an exemplary embodiment, the interactive information display device 900 further includes an uploading module.
The uploading module is configured to, when a publishing instruction triggered by the user is received, acquire the interactive content published by the user in the scene and upload the published interactive content to the server, where the published interactive content carries position information.
Specifically, while viewing the interactive information in the current scene, the user may also publish interactive content in that scene. When the user needs to publish interactive content, the user may input the content in a pre-provided interactive-content input window and, after completing the input, trigger a publishing instruction by clicking a preset publishing control, for example, a publish button on the screen.
After receiving the publishing instruction triggered by the user, the terminal device immediately acquires the interactive content published by the user in the scene and uploads it to the server, so that the interactive content can be displayed to other users for viewing.
It can be understood that, in this embodiment, in order for the server to be able to provide the interactive content to other users for viewing, the interactive content carries the position information of the user when the user publishes it.
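For illustration, a hypothetical client-side publish call is sketched below; the endpoint path and payload fields are invented for the example and are not an API defined in the patent.

```python
# Hedged sketch of uploading published interactive content with its position;
# the "/interactive-content" endpoint and the payload fields are assumptions.
import json
import urllib.request

def publish_interactive_content(server_url: str, user_id: str,
                                text: str, lat: float, lon: float) -> dict:
    payload = json.dumps({
        "user_id": user_id,
        "content": text,
        "position": {"lat": lat, "lon": lon},  # carried so the server can match scenes
    }).encode("utf-8")
    req = urllib.request.Request(
        server_url + "/interactive-content",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```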
In this embodiment, the interactive-content publishing function allows interactive content published by one user to be shared with other users for viewing.
Fig. 10 schematically shows a hardware architecture diagram of a computer device suitable for implementing the interactive information display method according to the fourth embodiment of the present application. In this embodiment, the computer device 10 is a device capable of automatically performing numerical calculation and/or information processing according to instructions that are set or stored in advance. For example, it may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of a plurality of servers). As shown in fig. 10, the computer device 10 includes at least, but is not limited to: a memory 910, a processor 920, and a network interface 930, which may be communicatively connected to one another via a system bus. Wherein:
The memory 910 includes at least one type of computer-readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 910 may be an internal storage module of the computer device 10, such as a hard disk or an internal memory of the computer device 10. In other embodiments, the memory 910 may also be an external storage device of the computer device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device 10. Of course, the memory 910 may also include both an internal storage module and an external storage device of the computer device 10. In this embodiment, the memory 910 is generally used to store the operating system and the various application software installed in the computer device 10, such as the program codes of the interactive information display method. In addition, the memory 910 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 920 may be, in some embodiments, a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 920 is generally configured to control the overall operation of the computer device 10, for example, to perform control and processing related to data interaction or communication with the computer device 10. In this embodiment, the processor 920 is configured to run the program codes stored in the memory 910 or to process data.
The network interface 930 may include a wireless network interface or a wired network interface, and is typically used to establish a communication link between the computer device 10 and other computer devices. For example, the network interface 930 is used to connect the computer device 10 to an external terminal via a network, to establish a data transmission channel and a communication link between the computer device 10 and the external terminal, and the like. The network may be a wireless or wired network such as an intranet, the Internet, the Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
It is noted that FIG. 10 only shows a computer device having components 910-930, but it is understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the interactive information display method stored in the memory 910 may also be divided into one or more program modules and executed by one or more processors (the processor 920 in this embodiment) to implement the present application.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the interactive information display method in the embodiments.
In this embodiment, the computer-readable storage medium includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer-readable storage medium may be an internal storage unit of the computer device, such as a hard disk or an internal memory of the computer device. In other embodiments, it may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device. Of course, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the computer device. In this embodiment, the computer-readable storage medium is generally used to store the operating system and the various types of application software installed in the computer device, for example, the program code of the interactive information display method in the embodiments. In addition, it may also be used to temporarily store various types of data that have been output or are to be output.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network formed by multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple of them may be fabricated as a single integrated circuit module. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application thereof in other related technical fields, falls within the scope of protection of the present application.

Claims (10)

1. An interactive information display method is characterized by comprising the following steps:
when an AR mode starting instruction triggered by a user is received, calling a camera of a terminal device to acquire a first image, and determining a scene where the user is currently located according to the first image;
acquiring first position information associated with the scene, and uploading the first position information to a server, so that the server determines interactive information associated with the scene according to the first position information and position information associated with each interactive content to be displayed;
and receiving the interactive information associated with the scene returned by the server, and displaying the interactive information in an AR manner on a screen of the terminal device, wherein the interactive information comprises at least one interactive content.
2. The interactive information display method of claim 1, wherein the step of calling a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image comprises:
calling the camera of the terminal device to acquire the first image;
judging whether a second image matched with the first image exists in a preset image library;
and if so, taking the scene associated with the second image as the scene where the user is currently located.
3. The interactive information display method of claim 1, wherein the step of calling a camera of the terminal device to acquire a first image and determining the scene where the user is currently located according to the first image comprises:
calling the camera of the terminal device to acquire the first image, wherein the first image carries second position information;
judging whether third position information matched with the second position information exists among a plurality of pieces of preset third position information;
and if so, taking the scene associated with the matched third position information as the scene where the user is currently located.
4. The interactive information display method of claim 1, wherein the step of receiving the interactive information associated with the scene returned by the server and displaying the interactive information in an AR manner on a screen of the terminal device, the interactive information including at least one interactive content, comprises:
receiving the interactive information associated with the scene returned by the server, and filtering the interactive information;
and displaying the filtered interactive information on the screen of the terminal device.
5. The interactive information display method of claim 1, wherein the interactive information comprises a plurality of first bullet screens, and displaying the plurality of first bullet screens on the screen of the terminal device comprises:
acquiring attribute information of the first bullet screen;
determining a presentation form of the first bullet screen according to the attribute information;
and displaying the first bullet screen on the screen of the terminal device in the presentation form.
6. The interactive information display method of claim 1, wherein the interactive information comprises a plurality of second bullet screens and a video with bullet screens, and after the plurality of second bullet screens and the video with bullet screens are displayed on the screen of the terminal device, the method further comprises:
acquiring a plurality of third bullet screens from the video with bullet screens;
and comparing the plurality of third bullet screens with the plurality of second bullet screens, and deleting the same bullet screen from the screen of the terminal device when the third bullet screens and the second bullet screens contain the same bullet screen.
7. The interactive information display method of any one of claims 1 to 6, wherein the method further comprises:
and when a publishing instruction triggered by the user is received, acquiring the interactive content published by the user in the scene, and uploading the published interactive content to the server, wherein the published interactive content carries position information.
8. An interactive information display device, comprising:
a receiving module, configured to call a camera of a terminal device to collect a first image when an AR mode starting instruction triggered by a user is received, and to determine a scene where the user is currently located according to the first image;
an acquisition module, configured to acquire first position information associated with the scene and upload the first position information to a server, so that the server determines interactive information associated with the scene according to the first position information and position information associated with each interactive content to be displayed;
and a display module, configured to receive the interactive information associated with the scene returned by the server and display the interactive information in an AR manner on a screen of the terminal device, wherein the interactive information comprises at least one interactive content.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the interactive information display method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the interactive information display method of any one of claims 1 to 7.
CN202010709915.5A 2020-07-22 2020-07-22 Interactive information display method and device and computer equipment Pending CN113973235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010709915.5A CN113973235A (en) 2020-07-22 2020-07-22 Interactive information display method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN113973235A true CN113973235A (en) 2022-01-25

Family ID=79584908

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002410A (en) * 2012-11-21 2013-03-27 北京百度网讯科技有限公司 Augmented reality method and system for mobile terminals and mobile terminals
CN103105993A (en) * 2013-01-25 2013-05-15 腾讯科技(深圳)有限公司 Method and system for realizing interaction based on augmented reality technology
US20140253743A1 (en) * 2012-05-10 2014-09-11 Hewlett-Packard Development Company, L.P. User-generated content in a virtual reality environment
CN106095881A (en) * 2016-06-07 2016-11-09 惠州Tcl移动通信有限公司 Method, system and the mobile terminal of a kind of display photos corresponding information
CN106648322A (en) * 2016-12-21 2017-05-10 广州市动景计算机科技有限公司 Method of triggering interactive operation with virtual object and device and system
CN106982387A (en) * 2016-12-12 2017-07-25 阿里巴巴集团控股有限公司 It has been shown that, method for pushing and the device and barrage application system of barrage
CN107247510A (en) * 2017-04-27 2017-10-13 成都理想境界科技有限公司 A kind of social contact method based on augmented reality, terminal, server and system
CN107481327A (en) * 2017-09-08 2017-12-15 腾讯科技(深圳)有限公司 On the processing method of augmented reality scene, device, terminal device and system
CN111225287A (en) * 2019-11-27 2020-06-02 网易(杭州)网络有限公司 Bullet screen processing method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412862A (en) * 2022-08-04 2022-11-29 广州市明道文化产业发展有限公司 Multi-role decentralized plot interaction method and device based on LBS (location based service) and storage medium
CN115412862B (en) * 2022-08-04 2024-04-30 广州市明道文化产业发展有限公司 Multi-role decentralization scenario interaction method and device based on LBS and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination