CN112689201A - Barrage information identification method, barrage information display method, server and electronic equipment - Google Patents


Info

Publication number
CN112689201A
CN112689201A
Authority
CN
China
Prior art keywords
bullet screen
target
screen information
information
object information
Prior art date
Legal status
Granted
Application number
CN201910990386.8A
Other languages
Chinese (zh)
Other versions
CN112689201B (en)
Inventor
易园林
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910990386.8A priority Critical patent/CN112689201B/en
Priority to PCT/CN2020/120415 priority patent/WO2021073478A1/en
Publication of CN112689201A publication Critical patent/CN112689201A/en
Application granted granted Critical
Publication of CN112689201B publication Critical patent/CN112689201B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/735 Information retrieval of video data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/783 Information retrieval of video data; retrieval characterised by metadata automatically derived from the content
    • G06V10/24 Image preprocessing; aligning, centring, orientation detection or correction of the image
    • H04N21/235 Selective content distribution; server-side processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/431 Selective content distribution; client-side generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/435 Selective content distribution; client-side processing of additional data, e.g. decrypting of additional data
    • H04N21/475 Selective content distribution; end-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4788 Selective content distribution; supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a bullet screen information identification method, a bullet screen information display method, a server and electronic equipment. The method comprises the following steps: obtaining an identification result determined after an electronic device identifies a target image area, and determining second object information from first object information included in a target video according to the identification result. The second object information determined in this way belongs both to the first object information in the target video and to the object information in the target image area. Consequently, the target bullet screen information identified from the stored bullet screen information, namely the bullet screen information including at least one piece of second object information, is related both to the object information in the target image area and to the target video. The target bullet screen information is sent to the electronic device so that the electronic device can display it. In this way, bullet screen information that genuinely interests the user can be pushed, which enriches and satisfies the user's personalized requirements for bullet screen information.

Description

Barrage information identification method, barrage information display method, server and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a bullet screen information identification method, a bullet screen information display method, a server and electronic equipment.
Background
With the rapid growth in the number of electronic devices and the continuous improvement in the speed and stability of mobile networks, applications on electronic devices that rely on mobile communication networks have become an important requirement for users.
At present, while watching a video on an electronic device, a user who wants to see bullet screen information can turn on the bullet screen switch and will then see a large amount of bullet screen information, namely comments or complaints published by other users watching the same video. The bullet screen information sent by each user appears at a specific time point in the video. Such bullet screen information increases interaction between users and makes watching the video more interesting.
However, when a user wants to shield the bullet screens that do not interest him or her, the usual option is simply to turn off the bullet screen switch, which impairs the interactive experience between users and reduces the enjoyment of watching the video. How to push only the bullet screen information that genuinely interests the user has therefore become an urgent technical problem to be solved in the field of personalized services.
Disclosure of Invention
The embodiment of the invention provides a bullet screen information identification method, a bullet screen information display method, a server and electronic equipment, aiming to solve the problem in the prior art that bullet screen information genuinely interesting to a user cannot be pushed to the user, and thereby to enrich and satisfy the user's personalized requirements for bullet screen information.
In order to solve the technical problem, the invention is realized as follows:
according to a first aspect of the embodiments of the present invention, an embodiment of the present invention provides a bullet screen information identification method, which is applied to a server, and the method includes:
acquiring an identification result determined after an electronic device identifies a target image area, wherein the target image area is determined by the electronic device according to a first input of a user to a target video, and the target video is a video currently played on the electronic device;
determining second object information from first object information included in the target video according to the identification result;
identifying target bullet screen information from the stored bullet screen information, wherein the target bullet screen information comprises at least one piece of second object information;
and sending the target bullet screen information to the electronic equipment.
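The four steps of the first aspect can be sketched in code. The following is a minimal Python illustration, not the patent's actual implementation: all names (identify_target_bullets, identification_result, and so on) are hypothetical, and substring matching stands in for whatever matching the server actually performs.

```python
def identify_target_bullets(identification_result, first_object_info, stored_bullets):
    # Step 2: the second object information is the object information that
    # appears both in the identification result (from the target image area)
    # and in the target video's first object information.
    second_object_info = set(identification_result) & set(first_object_info)
    # Step 3: a stored bullet screen is target bullet screen information if it
    # includes at least one piece of second object information.
    return [bullet for bullet in stored_bullets
            if any(obj in bullet for obj in second_object_info)]
```

For example, with identification result ["A"], first object information ["A", "B"] and stored bullet screens mentioning A and C, only the bullet screen mentioning A would be returned and sent to the electronic device (steps 1 and 4 are the receive and send ends of this pipeline).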
According to a second aspect of the embodiments of the present invention, an embodiment of the present invention further provides a bullet screen information display method, which is applied to an electronic device, and the method includes:
receiving a first input of a user to a target video played on the electronic equipment;
in response to the first input, determining a target frame image of the target video and determining a target image area on the target frame image;
identifying the target image area to obtain an identification result, and sending the identification result to a server, so that the server determines second object information from first object information included in the target video according to the identification result, and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information includes at least one piece of second object information;
receiving the target bullet screen information sent by the server;
and displaying the target bullet screen information.
According to a third aspect of the embodiments of the present invention, an embodiment of the present invention further provides a server, including:
the first acquisition module is used for acquiring an identification result determined after an electronic device identifies a target image area, wherein the target image area is determined by the electronic device according to a first input of a user to a target video, and the target video is a video currently played on the electronic device;
the determining module is used for determining second object information from first object information included in the target video according to the identification result;
the identification module is used for identifying target bullet screen information from the stored bullet screen information, and the target bullet screen information comprises at least one piece of second object information;
and the sending module is used for sending the target bullet screen information to the electronic equipment.
According to a fourth aspect of the embodiments of the present invention, there is also provided an electronic device, including:
the first receiving module is used for receiving first input of a user to a target video played on the electronic equipment;
a determining module, configured to determine a target frame image of the target video in response to the first input, and determine a target image area on the target frame image;
the identification module is used for identifying the target image area, obtaining an identification result and sending the identification result to a server, so that the server determines second object information from first object information included in the target video according to the identification result and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information comprises at least one piece of second object information;
the second receiving module is used for receiving the target bullet screen information sent by the server;
and the target bullet screen information display module is used for displaying the target bullet screen information.
According to a fifth aspect of the embodiments of the present invention, there is also provided a server, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above bullet screen information identification method.
According to a sixth aspect of the embodiments of the present invention, there is also provided an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above bullet screen information display method.
In the embodiment of the invention, the identification result determined after the electronic device identifies the target image area is obtained, and second object information is determined from the first object information included in the target video according to the identification result. The second object information determined in this way belongs both to the first object information in the target video and to the object information in the target image area. Therefore, the target bullet screen information identified from the stored bullet screen information, which includes at least one piece of second object information, is bullet screen information related both to the object information in the target image area and to the target video; that is, it is bullet screen information the user is interested in. The target bullet screen information is sent to the electronic equipment so that the electronic equipment can display it. In this way, bullet screen information that genuinely interests the user can be pushed, which enriches and satisfies the user's personalized requirements for bullet screen information.
Drawings
Fig. 1 is a flowchart illustrating a bullet screen information identification method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target video provided in an embodiment of the invention;
fig. 3 is a flowchart illustrating another bullet screen information identification method provided in an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a non-closed region supplemented into a closed region provided in an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a display of target bullet screen information associated with each of the categories provided in the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating an object icon presentation provided in an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a target bullet screen information sorting and highlighting process provided in an embodiment of the present invention;
fig. 8 illustrates a block diagram of a server 800 provided in an embodiment of the invention;
fig. 9 shows a block diagram of another server 900 provided in an embodiment of the invention;
fig. 10 shows a block diagram of an electronic device 1000 provided in an embodiment of the invention;
FIG. 11 illustrates a block diagram of another electronic device 1100 provided in an embodiment of the invention;
fig. 12 illustrates a block diagram of yet another electronic device 1200 provided in an embodiment of the invention;
fig. 13 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present invention will be described in detail with reference to specific examples.
Referring to fig. 1, a flowchart of a bullet screen information identification method provided in the embodiment of the present invention is shown, and is applied to a server, where the method specifically includes the following steps:
step 101, obtaining a recognition result determined after the electronic device recognizes a target image area, where the target image area is determined by the electronic device according to a first input of a user to a target video, and the target video is a video currently played on the electronic device.
The electronic device may include a smart phone, a tablet computer, a notebook computer, and the like; these examples are merely illustrative, and the present invention is not limited thereto. The first input may be an operation in which the user scribes a line on the target video with a mouse to circle the target image area, or scribes a line on the target video with a finger to circle the target image area. The embodiment of the present invention does not limit the form of the first input.
For example, referring to fig. 2, a schematic diagram of a target video provided in an embodiment of the present invention is shown. The region inside the circle 201 in fig. 2 is the target image area determined by the electronic device 202 according to the user's first input on the target video; the target image area is a closed region. The first input may be a single scribing operation or multiple scribing operations by the user. A single scribing operation may define one target image area or several; for example, drawing the figure "8" defines two target image areas, that is, two closed regions. The first input may also define multiple closed regions through multiple scribing operations; the present invention does not limit the number of target image areas. The user can thus circle the image area of interest by a scribing operation. It should be noted that the remaining marks in fig. 2 represent bullet screen information, which is delivered while the user views the target video.
The shape of the target image region may be a circular region as shown in fig. 2, or may be a closed region having another shape such as an elliptical region or a square region, which is not limited in the present invention.
The electronic device identifies the target image area to obtain an identification result and sends the identification result to the server, so that the server obtains the identification result determined after the electronic device identifies the target image area. The identification result includes the object information contained in the target image area. For example, the electronic device identifies the target image area in fig. 2, and the identification result is object information A, which may be the name of object A.
And 102, determining second object information from the first object information included in the target video according to the identification result.
The first object information included in the target video may be names of objects, where an object may be, for example, a person or an item. If the target video is 40 minutes long, the first object information included in the target video is the object information of all frame images within those 40 minutes. Take the case where the first object information includes object information A, object information B, object information C, object information D and object information E. If the identification result determined in step 101 is object information A, the second object information determined in this step according to the identification result is object information A in the first object information. If the identification result determined in step 101 is object information A and object information F, the second object information determined in this step according to the identification result is object information A in the first object information; that is, the determined second object information belongs to the first object information.
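The determination in step 102 amounts to a set intersection. A small illustration follows; the object names are the placeholders used in the example above, not data from the patent.

```python
# Objects appearing anywhere in the 40-minute target video (illustrative names).
first_object_info = {"A", "B", "C", "D", "E"}

# Identification result contains only object information A:
second_1 = {"A"} & first_object_info

# Identification result contains object information A and F; F does not appear
# in the target video, so only A is determined as second object information:
second_2 = {"A", "F"} & first_object_info
```

In both cases the determined second object information is {"A"}, confirming that it always belongs to the first object information.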
And 103, identifying target bullet screen information from the stored bullet screen information, wherein the target bullet screen information comprises at least one piece of second object information.
The stored bullet screen information is bullet screen information sent to the server by users watching the target video through electronic devices. The server can store this bullet screen information and identify, from it, the bullet screen information that includes at least one piece of second object information. For example, if the second object information determined in step 102 is object information A, the bullet screen information including object information A is identified from the stored bullet screen information. If the second object information determined in step 102 is object information A, object information B and object information C, the target bullet screen information including at least one piece of this second object information is identified from the stored bullet screen information.
To describe the above steps more clearly, the following description is given in conjunction with table 1. Suppose the target video includes the first object information: object information A, object information B, object information C, object information D and object information E; the server stores 10 pieces of bullet screen information in total; and the determined second object information is object information A, object information B and object information C. Among the 10 pieces of bullet screen information, bullet screen information 1 includes object information A, bullet screen information 2 includes object information B, bullet screen information 3 includes object information B, bullet screen information 4 includes object information A, bullet screen information 5 includes object information C, bullet screen information 6 includes object information B, bullet screen information 7 includes object information A, bullet screen information 8 includes object information A, bullet screen information 9 includes object information E, and bullet screen information 10 includes object information F. As shown in table 1 below, the first column is the stored bullet screen information, the second column is the object information included in that bullet screen information, the third column is the determined second object information, and the fourth column indicates whether the bullet screen information is target bullet screen information. For example, since bullet screen information 1 includes object information A, bullet screen information 1 is identified as target bullet screen information; likewise, bullet screen information 2, 3, 4, 5, 6, 7 and 8 are identified as target bullet screen information.
Bullet screen information   Object information included   Second object information   Target bullet screen information?
1                           A                             A, B, C                     Yes
2                           B                             A, B, C                     Yes
3                           B                             A, B, C                     Yes
4                           A                             A, B, C                     Yes
5                           C                             A, B, C                     Yes
6                           B                             A, B, C                     Yes
7                           A                             A, B, C                     Yes
8                           A                             A, B, C                     Yes
9                           E                             A, B, C                     No
10                          F                             A, B, C                     No
TABLE 1
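Table 1 can be reproduced with a few lines of code. The data below mirrors the example, with each bullet screen carrying a single piece of object information; it is purely illustrative.

```python
# Object information included in each of the 10 stored bullet screens.
bullets = {1: "A", 2: "B", 3: "B", 4: "A", 5: "C",
           6: "B", 7: "A", 8: "A", 9: "E", 10: "F"}
second_object_info = {"A", "B", "C"}

# A bullet screen is target bullet screen information if its object
# information belongs to the determined second object information.
targets = [n for n, obj in bullets.items() if obj in second_object_info]
# targets -> [1, 2, 3, 4, 5, 6, 7, 8]
```

Bullet screens 9 and 10 mention objects E and F, which are outside the second object information, so they are excluded, matching the last two rows of table 1.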
And step 104, sending the target bullet screen information to the electronic equipment.
In the bullet screen information identification method provided by this embodiment, the identification result determined after the electronic device identifies the target image area is obtained, and second object information is determined from the first object information included in the target video according to the identification result. The second object information determined in this way belongs both to the first object information in the target video and to the object information in the target image area. Therefore, the target bullet screen information identified from the stored bullet screen information, which includes at least one piece of second object information, is bullet screen information related both to the object information in the target image area and to the target video; that is, it is bullet screen information the user is interested in. The target bullet screen information is sent to the electronic equipment so that the electronic equipment can display it. In this way, bullet screen information that genuinely interests the user can be pushed, which enriches and satisfies the user's personalized requirements for bullet screen information.
Referring to fig. 3, a flowchart of another bullet screen information identification method provided in the embodiment of the present invention is shown.
Step 301, the electronic device receives a first input of a target video played on the electronic device by a user.
Step 302, the electronic device determines a target frame image of the target video and determines a target image area on the target frame image in response to the first input.
Wherein, in response to the first input, determining a target frame image of the target video and determining a target image area on the target frame image may be implemented by:
in response to the first input, determining a scribing trajectory corresponding to the first input;
determining the frame image of the target video corresponding to the end time of the first input, and taking that frame image as the target frame image;
in the case that the scribing trajectory is a non-closed region, supplementing the non-closed region into a closed region, and taking the closed region on the target frame image as the target image area;
in the case that the scribing trajectory is already a closed region, taking the closed region formed by the scribing trajectory as the target image area.
It should be noted that taking the frame image of the target video corresponding to the end time of the first input as the target frame image ensures the integrity of the object information included in the target image area determined on the target frame image according to the scribing trajectory. For example, suppose the user performs a circling scribing operation, the frame image at the moment when half the circle has been drawn includes only object A, the frame image at the end of the complete circle includes objects A and B, and the scribing trajectory is a closed region containing both objects A and B. If the frame image at the half-circle moment were taken as the target frame image, the object information included in the target image area might include only object A, and the object information in the target image area would therefore be incomplete.
However, when the scribing trajectory is a non-closed region, the non-closed region needs to be supplemented into a closed region. For example, referring to fig. 4, which shows a schematic diagram of supplementing a non-closed region into a closed region provided in an embodiment of the present invention: the non-closed region corresponding to the scribing trajectory is supplemented into a closed region by the dashed line in fig. 4.
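The supplementing step can be sketched as follows. The point representation, the distance test, and the tolerance value are assumptions for illustration, not details taken from the patent.

```python
def close_trajectory(points, tol=10.0):
    """Supplement a non-closed scribing trajectory into a closed region by
    joining its endpoints with a straight segment (the dashed line in fig. 4).
    `points` is a list of (x, y) pixel coordinates; `tol` is an assumed
    threshold below which the trajectory already counts as closed."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    if ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 > tol:
        # Endpoints are far apart: the region is non-closed, so close it.
        return points + [points[0]]
    return points
```

A real implementation would also have to rasterize the resulting polygon to extract the target image area; this sketch only covers the closing of the trajectory itself.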
And step 303, the electronic equipment identifies the target image area, obtains an identification result, and sends the identification result to the server.
And step 304, the server acquires bullet screen information of the target video.
Step 305, the server performs semantic analysis on the bullet screen information to obtain at least one keyword of the bullet screen information.
It should be noted that, before performing semantic analysis on the bullet screen information and obtaining at least one keyword of the bullet screen information in step 305, the following steps may also be included:
and removing the symbol information in the bullet screen information under the condition that the symbol information is included in the bullet screen information.
If the bullet screen information includes symbol information, such as punctuation marks, emoticons or other symbols, the symbol information may be removed first. If a piece of bullet screen information consists entirely of symbol information, no semantic analysis is needed after the symbols are removed. For bullet screen information containing a large amount of symbol information, removing the symbols leaves only the remaining effective character information, which speeds up the semantic analysis of that information.
Correspondingly, the semantic analysis is performed on the bullet screen information in step 305, and the obtaining of at least one keyword of the bullet screen information can be realized through the following steps:
and performing semantic analysis on the bullet screen information without the symbolic information to obtain at least one keyword of the bullet screen information.
The keyword is, for example, a name of the object.
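A minimal Python sketch of this pre-processing, under the assumption stated in the text that keywords are object names (the object names, regex, and function name here are hypothetical placeholders, not the disclosed implementation):

```python
import re

# Hypothetical known object names of the target video (first object info).
OBJECT_NAMES = {"ObjectA", "ObjectB", "ObjectC"}
# Matches symbol information: anything that is not a word character or space.
SYMBOLS = re.compile(r"[^\w\s]", flags=re.UNICODE)

def extract_keywords(barrage):
    text = SYMBOLS.sub("", barrage)  # remove symbol information first
    if not text.strip():
        return []  # pure-symbol barrage: no semantic analysis needed
    # Toy "semantic analysis": keep tokens that are known object names.
    return [t for t in text.split() if t in OBJECT_NAMES]

kws = extract_keywords("ObjectA is so good!!! :-)")
```

A production system would use a real segmenter/NER model rather than whitespace tokenization, but the flow (strip symbols, short-circuit pure-symbol barrages, then analyze) matches the steps described above.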
And step 306, matching each keyword included in the bullet screen information with all the first object information.
If any of the keywords matches one piece of third object information among all the first object information, step 307 is performed; if none of the keywords matches any third object information, step 308 is performed.
For example, suppose the keywords of the bullet screen information include the name of object A (keyword 1), the name of object B (keyword 2), the name of object C (keyword 3), and so on, which are not enumerated one by one herein. If all the first object information includes object information A, object information B, object information C, object information D, and object information E, then keyword 1 of the bullet screen information matches object information A, and keyword 2 matches object information B; here, object information A is one piece of third object information among all the first object information, and object information B is also one piece of third object information among all the first object information.
For example, the matching of each keyword included in the bullet screen information against all the first object information is described with reference to table 2 below, taking the 10 pieces of bullet screen information described in the above embodiments as examples. After semantic analysis, the keyword of bullet screen information 1 is keyword 1; the keyword of bullet screen information 2 is keyword 2; the keyword of bullet screen information 3 is keyword 2; the keyword of bullet screen information 4 is keyword 1; the keyword of bullet screen information 5 is keyword 3 (the name of object C); the keyword of bullet screen information 6 is keyword 2; the keyword of bullet screen information 7 is keyword 1; the keyword of bullet screen information 8 is keyword 1; the keyword of bullet screen information 9 is keyword 5 (the name of object E); and the keyword of bullet screen information 10 is keyword 6 (the name of object F).
As shown in table 2 below, the first column is the bullet screen information, the second column is the keyword of the bullet screen information, and the third column is all the first object information of the target video.
Bullet screen information      Keyword      All first object information of the target video
Bullet screen information 1    Keyword 1    Object information A, B, C, D, E
Bullet screen information 2    Keyword 2    Object information A, B, C, D, E
Bullet screen information 3    Keyword 2    Object information A, B, C, D, E
Bullet screen information 4    Keyword 1    Object information A, B, C, D, E
Bullet screen information 5    Keyword 3    Object information A, B, C, D, E
Bullet screen information 6    Keyword 2    Object information A, B, C, D, E
Bullet screen information 7    Keyword 1    Object information A, B, C, D, E
Bullet screen information 8    Keyword 1    Object information A, B, C, D, E
Bullet screen information 9    Keyword 5    Object information A, B, C, D, E
Bullet screen information 10   Keyword 6    Object information A, B, C, D, E
TABLE 2
The first object information may be the name of an object; for example, object information A is the name of object A, and object information B is the name of object B. Since each keyword is the name of an object and the first object information is also the name of an object, the keywords can be matched against the first object information. As can be seen from table 2 and the description above, keyword 1 of bullet screen information 1 matches object information A among all the first object information; keyword 2 of bullet screen information 2 matches object information B; keyword 2 of bullet screen information 3 matches object information B; keyword 1 of bullet screen information 4 matches object information A; keyword 3 of bullet screen information 5 matches object information C; whether the other bullet screen information matches a piece of third object information among all the first object information is not illustrated one by one. It should be noted that, since the keyword of bullet screen information 10 is keyword 6 (the name of object F), no third object information matching keyword 6 exists among all the first object information.
Step 307, the server stores the third object information and the bullet screen information associated with the third object information.
As can be seen from table 2 and the description above, since keyword 1 of bullet screen information 1 matches object information A (third object information) among all the first object information, object information A and the bullet screen information 1 associated with object information A are stored. Similarly, keyword 2 of bullet screen information 2 matches object information B (third object information) among all the first object information. Whether each keyword of the remaining bullet screen information matches a piece of third object information is not described one by one; for the stored third object information and the bullet screen information associated with it, reference may be made to table 3 below. Referring to table 3:
Third object information    Bullet screen information associated with the third object information
Object information A        Bullet screen information 1
Object information B        Bullet screen information 2
Object information B        Bullet screen information 3
Object information A        Bullet screen information 4
Object information C        Bullet screen information 5
Object information B        Bullet screen information 6
Object information A        Bullet screen information 7
Object information A        Bullet screen information 8
Object information E        Bullet screen information 9
TABLE 3
Step 308, the server stores the irrelevant information and the bullet screen information associated with the irrelevant information.
Since the keyword included in bullet screen information 10 is keyword 6, and no third object information matching keyword 6 exists among all the first object information, it can be determined that bullet screen information 10 is irrelevant information, that is, bullet screen information unrelated to the object information in the target image area; the irrelevant information and the bullet screen information 10 associated with the irrelevant information are then stored.
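Steps 306 to 308 can be sketched together in a few lines of Python (the data layout, names, and the `"IRRELEVANT"` marker are illustrative assumptions; the patent does not prescribe a storage format):

```python
# All first object information of the target video (hypothetical names).
FIRST_OBJECT_INFO = {"ObjectA", "ObjectB", "ObjectC", "ObjectD", "ObjectE"}

def store_associations(barrages_with_keywords):
    """Match each barrage's keywords against all first object information.

    Matched keywords become third object information and are stored with
    the barrage (step 307); a barrage with no match is stored under an
    irrelevant-information marker (step 308), mirroring Table 3.
    """
    store = []  # list of (object_info, barrage) association records
    for barrage, keywords in barrages_with_keywords:
        matched = [k for k in keywords if k in FIRST_OBJECT_INFO]
        if matched:
            for obj in matched:  # step 307: store third object info
                store.append((obj, barrage))
        else:  # step 308: store as irrelevant information
            store.append(("IRRELEVANT", barrage))
    return store

records = store_associations([
    ("bullet screen information 1", ["ObjectA"]),
    ("bullet screen information 10", ["ObjectF"]),  # no match
])
```

Note that a barrage whose keywords match several objects is stored once per matched object, which is consistent with the later remark that such barrages may fall under more than one category.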
Step 309, the server obtains the identification result determined after the electronic device identifies the target image area.
Step 310, in the case that the recognition result matches at least one piece of all the first object information, the server takes the first object information matching the recognition result as the second object information.
With reference to the description in the foregoing embodiment, taking the example in which all the first object information includes object information A, object information B, object information C, object information D, and object information E: if the identification result obtained by the server in step 309 is object information A, the first object information matching the identification result in this step is object information A, and object information A is taken as the second object information. If the identification result obtained in step 309 is object information A, object information B, and object information C, the first object information matching the identification result includes object information A, object information B, and object information C, and all three are taken as second object information. If the identification result obtained in step 309 is object information F, no first object information matching object information F exists among all the first object information.
Step 311, in the case that bullet screen information including at least one piece of second object information exists in the stored bullet screen information, the server takes the bullet screen information including at least one piece of second object information as target bullet screen information.
And the stored bullet screen information comprises bullet screen information related to the third object information and/or bullet screen information related to irrelevant information.
It should be noted that, because the third object information and the bullet screen information associated with it are stored together, that is, the third object information is already associated with the bullet screen information, the server may directly determine whether the third object information associated with a piece of bullet screen information includes at least one piece of second object information. If it does, that bullet screen information is used as the target bullet screen information. The target bullet screen information can therefore be determined without the server re-analyzing whether the bullet screen information itself includes at least one piece of second object information, which speeds up the identification of the target bullet screen information to a certain extent, so that the target bullet screen information can be quickly sent to the electronic device for display.
For example, if the second object information is object information A, then taking bullet screen information 1 in table 3 as an example, bullet screen information 1 is associated with the third object information (object information A), so it can be determined directly from this association that the third object information (object information A) matches the second object information (object information A); thus, bullet screen information 1 is target bullet screen information. After the second object information is determined, no further semantic analysis of the bullet screen information is needed to judge which bullet screen information belongs to the target bullet screen information, which improves the efficiency of identifying the target bullet screen information to a certain extent.
Specifically, it may be determined whether, among the stored bullet screen information (that is, the bullet screen information associated with the third object information and/or the bullet screen information associated with irrelevant information), there exists bullet screen information that includes at least one piece of second object information. If such bullet screen information exists, it is used as the target bullet screen information; if it does not exist, no target bullet screen information is identified.
It should be noted that, if all the stored bullet screen information is bullet screen information associated with irrelevant information, the server cannot identify any target bullet screen information, and the server may send a prompt message to the electronic device to prompt the user to circle an area on the target video again. If the stored bullet screen information includes bullet screen information associated with the third object information, bullet screen information including at least one piece of second object information can be identified among the stored bullet screen information, and the identified bullet screen information is used as the target bullet screen information. For example, if the second object information includes object information A, object information B, and object information C, then according to table 3 above, the target bullet screen information includes bullet screen information 1 to bullet screen information 8.
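Because the store already carries the association, step 311 reduces to a lookup over the stored records; a hedged sketch (store layout mirrors Table 3, names are illustrative):

```python
# Stored associations in the shape of Table 3 (hypothetical data).
STORE = [
    ("ObjectA", "bullet screen information 1"),
    ("ObjectB", "bullet screen information 2"),
    ("ObjectC", "bullet screen information 5"),
    ("IRRELEVANT", "bullet screen information 10"),
]

def target_barrages(second_object_info):
    """Step 311: select barrages whose associated third object information
    is among the second object information -- no re-analysis of the barrage
    text is needed, only a check of the stored association."""
    return [b for obj, b in STORE if obj in second_object_info]

hits = target_barrages({"ObjectA", "ObjectC"})
```

If `hits` comes back empty (every record is irrelevant information), the server would instead send the prompt message described above.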
Step 312, if there are multiple pieces of second object information, the server classifies the target bullet screen information according to each piece of second object information to obtain the target bullet screen information of each category.
For example, if the second object information includes object information a, object information B, and object information C, the server may classify the target bullet screen information according to each piece of the object information, and obtain target bullet screen information of each class, as shown in table 4 below:
Category      Second object information    Target bullet screen information
Category 1    Object information A         Bullet screen information 1, 4, 7, 8
Category 2    Object information B         Bullet screen information 2, 3, 6
Category 3    Object information C         Bullet screen information 5
TABLE 4
It should be noted that, because some bullet screen information may include multiple pieces of object information, for example object information A, object information B, and object information C all at once, such bullet screen information may be classified under object information A, object information B, or object information C, or may be classified into a separate category independent of object information A, object information B, and object information C.
As shown in table 4 above, the target bullet screen information of category 1 includes bullet screen information 1, bullet screen information 4, bullet screen information 7, and bullet screen information 8; the target bullet screen information of the category 2 comprises bullet screen information 2, bullet screen information 3 and bullet screen information 6; the target bullet screen information of category 3 includes bullet screen information 5.
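The per-category grouping of step 312 can be sketched as a simple group-by over the stored associations (data layout and names are illustrative assumptions, matching Table 4's shape):

```python
from collections import defaultdict

def classify(store, second_object_info):
    """Step 312: group target barrages into one category per piece of
    second object information, as in Table 4."""
    categories = defaultdict(list)
    for obj, barrage in store:
        if obj in second_object_info:
            categories[obj].append(barrage)
    return dict(categories)

store = [("ObjectA", "b1"), ("ObjectB", "b2"),
         ("ObjectA", "b4"), ("ObjectC", "b5")]
cats = classify(store, {"ObjectA", "ObjectB", "ObjectC"})
```

A barrage stored under several objects (the multi-object case noted above) would naturally appear in each matching category with this layout; making it a separate category would require an extra key.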
Step 313, the server sends the target barrage information of each category to the electronic device.
Correspondingly, the electronic equipment receives the target bullet screen information of each category sent by the server, and displays the target bullet screen information of each category according to its category.
Referring to fig. 5, a schematic diagram of displaying the target bullet screen information associated with each category provided in the embodiment of the present invention is shown. A shown in fig. 5 is the object corresponding to object information A (second object information), and the target bullet screen information of category 1 is displayed on the right side of A. Each piece of bullet screen information is represented by a bracket [ ], and 4 pieces of bullet screen information (bullet screen information 1, bullet screen information 4, bullet screen information 7, and bullet screen information 8) are displayed on the right side of category 1; the target bullet screen information of the other categories is not described one by one.
The method can also comprise the following steps:
the server generates a thumbnail according to each second object information;
and the server sends the thumbnail to the electronic equipment so that the electronic equipment can display the thumbnail.
It should be noted that the server may send the thumbnail to the electronic device after sending the target bullet screen information of each category, before sending it, or at the same time as sending it. The present invention does not limit the timing at which the server sends the thumbnail to the electronic device.
Correspondingly, the electronic equipment receives the thumbnail sent by the server, and the thumbnail is generated by the server according to each second object information; displaying the thumbnail;
after displaying the thumbnail, the electronic device may receive a second input of the thumbnail by the user; and controlling the moving direction and the moving distance of the target bullet screen information of each category or hiding the thumbnail in response to the second input.
For example, as shown in fig. 5, the electronic device may display a thumbnail 501 in the upper left corner of fig. 5. The second input is the input corresponding to the user dragging the thumbnail 501 up, down, or to the left. If the thumbnail 501 is dragged upward, the target bullet screen information of each category slides up; if it is dragged downward, the target bullet screen information of each category slides down; if it is dragged to the left, the thumbnail 501 is hidden. Note that the sliding distance of the target bullet screen information of each category may be the same as the sliding distance of the thumbnail 501. Through this step, the user can control the display position of the bullet screen in the target video or hide the thumbnail.
After displaying the thumbnail, the following steps can be further included:
receiving a third input to the thumbnail from the user; expanding, in response to the third input, an object icon corresponding to each category included in the thumbnail; receiving a fourth input from the user on a first target object icon among all the object icons; and, in response to the fourth input, adjusting the position of the first target object icon in the thumbnail, sorting the target bullet screen information of each category according to the position of the object icon corresponding to each category in the thumbnail to obtain a sorting result, and displaying the sorting result.
Referring to fig. 6, which shows an object icon presentation diagram provided in an embodiment of the present invention, the third input may be an operation in which the user clicks the thumbnail 501 of fig. 5. In response to the third input, the electronic device expands the object icon corresponding to each category included in the thumbnail 501, as shown in fig. 6: the object icon corresponding to category 1 is "A", the object icon corresponding to category 2 is "B", and the object icon corresponding to category 3 is "C".
Referring to fig. 7, a schematic diagram of target bullet screen information sorting and highlighting processing provided in the embodiment of the present invention is shown. The fourth input may be the user dragging an object icon, which changes the display position of the dragged object icon and the display order of the target bullet screen information of the category corresponding to the dragged icon. As shown in fig. 7, when the user drags object icon A to the position of object icon B in fig. 6, the positions of object icon A and object icon B are interchanged, and the positions of the target bullet screen information of category 1 and category 2 are interchanged accordingly. That is, the target bullet screen information of category 1 moves from the first row (as shown in fig. 6) to the second row (as shown in fig. 7), and the target bullet screen information of category 2 moves from the second row to the first row.
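The icon-drag reordering just described amounts to swapping two entries in the icon order and re-deriving the display rows from that order; a minimal sketch (function names and data shapes are hypothetical):

```python
def swap_icons(icon_order, dragged, target):
    """Fourth input: dragging one object icon onto another swaps their
    positions in the thumbnail."""
    order = icon_order[:]
    i, j = order.index(dragged), order.index(target)
    order[i], order[j] = order[j], order[i]
    return order

def rows_for(icon_order, categories):
    # One display row of target barrage information per icon,
    # in the icons' current order (the sorting result).
    return [categories[icon] for icon in icon_order]

order = swap_icons(["A", "B", "C"], "A", "B")  # A dragged onto icon B
rows = rows_for(order, {"A": ["b1"], "B": ["b2"], "C": ["b5"]})
```

With this layout, category 1's row moves from first to second position exactly as figs. 6 and 7 describe.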
Wherein, after expanding the object icon corresponding to each category included in the thumbnail in response to the third input, the following steps may be further included:
receiving a fifth input of the user to a second target object icon in all the object icons;
and responding to the fifth input, determining a target category corresponding to the second target object icon, highlighting the target barrage information of the target category, and displaying the highlighted target barrage information of the target category.
The fifth input may be an operation of clicking an object icon expanded from the thumbnail by the user, as shown in fig. 7, when the user clicks the object icon a, the electronic device may determine that the target category corresponding to the object icon a is category 1, may perform highlighting processing on the target bullet screen information of the category 1, and display the target bullet screen information of the highlighted target category. The target bullet screen information of the target category is highlighted, for example, the font of the target bullet screen information is thickened, enlarged or changed in color. Fig. 7 shows a highlighting process of thickening the font of the target bullet-screen information of the category 1.
Referring to fig. 8, a block diagram of a server 800 provided in an embodiment of the present invention is shown. The server 800 includes:
a first obtaining module 810, configured to obtain a recognition result determined after an electronic device recognizes a target image area, where the target image area is determined by the electronic device according to a first input of a target video by a user, and the target video is a video currently played on the electronic device;
a determining module 820, configured to determine second object information from first object information included in the target video according to the recognition result;
an identifying module 830, configured to identify target bullet screen information from the stored bullet screen information, where the target bullet screen information includes at least one piece of second object information;
a sending module 840, configured to send the target barrage information to the electronic device.
Referring to fig. 9, which shows a block diagram of another server 900 provided in the embodiment of the present invention, the server 900 may further include:
a second obtaining module 910, configured to obtain barrage information of the target video;
an analysis module 920, configured to perform semantic analysis on the bullet screen information to obtain at least one keyword of the bullet screen information;
a matching module 930, configured to match each keyword included in the bullet screen information with all the first object information;
a storage module 940, configured to store the third object information and the bullet screen information associated with the third object information when one keyword in each keyword matches with one third object information in all the first object information.
Optionally, the server 900 may further include:
a removing module 950, configured to remove symbol information from the bullet screen information when the bullet screen information includes the symbol information;
correspondingly, the analysis module 920 is specifically configured to perform semantic analysis on the bullet screen information from which the symbolic information is removed, so as to obtain at least one keyword of the bullet screen information.
Optionally, the storage module 940 is further configured to store irrelevant information and the bullet screen information associated with the irrelevant information when each keyword is not matched with any of the third object information.
Optionally, the identifying module 830 is specifically configured to, in a case that there is bullet screen information including at least one piece of second object information in the stored bullet screen information, use the bullet screen information including at least one piece of second object information as the target bullet screen information; the stored bullet screen information comprises bullet screen information related to the third object information and/or bullet screen information related to the irrelevant information.
Optionally, the determining module 820 is specifically configured to, when the identification result matches at least one of all the first object information, use the first object information matching the identification result as the second object information.
Optionally, the server 900 may further include:
a category classification module 960, configured to, if there are multiple pieces of second object information, classify the target bullet screen information according to each piece of the second object information to obtain the target bullet screen information of each category;
correspondingly, the sending module 840 is specifically configured to send the target barrage information of each category to the electronic device.
Optionally, the sending module 840 is further configured to generate a thumbnail according to each piece of the second object information; and sending the thumbnail to the electronic equipment so that the electronic equipment can display the thumbnail.
Referring to fig. 10, a block diagram of an electronic device 1000 provided in an embodiment of the invention is shown.
The electronic device 1000 includes:
a first receiving module 1010, configured to receive a first input of a target video played on the electronic device from a user;
a determining module 1020, configured to determine a target frame image of the target video and determine a target image area on the target frame image in response to the first input;
the identifying module 1030 is configured to identify the target image area, obtain an identification result, and send the identification result to a server, so that the server determines, according to the identification result, second object information from first object information included in the target video, and identifies target bullet screen information from stored bullet screen information, where the target bullet screen information includes at least one piece of the second object information;
a second receiving module 1040, configured to receive the target barrage information sent by the server;
and a target bullet screen information display module 1050 for displaying the target bullet screen information.
Optionally, the determining module 1020 is specifically configured to determine, in response to the first input, a scribing trajectory corresponding to the first input;
determining a frame image of the target video corresponding to the end time of the first input, and taking the frame image of the target video corresponding to the end time of the first input as the target frame image;
when the scribing track is the non-closed area, supplementing the non-closed area to a closed area, and taking the closed area on the target frame image as the target image area;
and in the case that the scribing track is not the non-closed area, taking a closed area formed by the scribing track as the target image area.
Optionally, the second receiving module 1040 is specifically configured to receive the target bullet screen information of each category sent by the server, where the target bullet screen information of each category is obtained by the server classifying the target bullet screen information according to each piece of the second object information; and to display the target bullet screen information of each category according to its category.
Optionally, referring to fig. 11, which shows a block diagram of another electronic device 1100 provided in the embodiment of the present invention, the electronic device 1100 may further include:
a third receiving module 1110, configured to receive a thumbnail sent by the server, where the thumbnail is generated by the server according to each piece of the second object information;
a thumbnail display module 1120 for displaying the thumbnail;
a fourth receiving module 1130, configured to receive a second input of the thumbnail from the user;
a control module 1140, configured to control a moving direction and a moving distance of the target barrage information of each of the categories or hide the thumbnail in response to the second input.
Optionally, referring to fig. 12, a block diagram of another electronic device 1200 provided in the embodiment of the present invention is shown, where the electronic device 1200 may further include:
a fifth receiving module 1210, configured to receive a third input to the thumbnail from the user;
an object icon expansion module 1220 for expanding, in response to the third input, an object icon included in the thumbnail image corresponding to each of the categories;
a sixth receiving module 1230, configured to receive a fourth input of the user on a first target object icon of all the object icons;
the sorting module 1240 is configured to, in response to the fourth input, adjust the position of the target object icon in the thumbnail, and sort the target bullet screen information of each category according to the position of the object icon corresponding to each category in the thumbnail, so as to obtain a sorting result;
a sorting result display module 1250 configured to display the sorting result.
Optionally, the electronic device 1200 may further include:
a seventh receiving module 1260, configured to receive a fifth input of the second target object icon from among all the object icons by the user;
a processing module 1270, configured to, in response to the fifth input, determine a target category corresponding to the second target object icon, perform highlighting processing on the target barrage information of the target category, and display the highlighted target barrage information of the target category.
Fig. 13 is a hardware structure diagram of an electronic device implementing various embodiments of the present invention.
the electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, a power supply 1311, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 13 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 1310 is configured to receive a first input of a target video played on the electronic device from a user;
in response to the first input, determining a target frame image of the target video and determining a target image area on the target frame image;
identifying the target image area to obtain an identification result, and sending the identification result to a server, so that the server determines second object information from first object information included in the target video according to the identification result, and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information includes at least one piece of second object information;
receiving the target bullet screen information sent by the server;
and displaying the target bullet screen information.
In the embodiment of the invention, the target image area on the target frame image is determined, the target image area is identified to obtain the identification result, the identification result is sent to the server, so that the server determines second object information from first object information included in the target video according to the identification result, and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information includes at least one piece of second object information, and the target bullet screen information sent by the server is received and displayed. Therefore, the target bullet screen information related to the object information of the target image area interested by the user is displayed, and the individual requirements of the user are met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1301 may be configured to receive and transmit signals during a message transmission or call process; specifically, it receives downlink data from a base station and forwards the data to the processor 1310 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 1301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1301 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 1302, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1303 can convert audio data received by the radio frequency unit 1301 or the network module 1302, or stored in the memory 1309, into an audio signal and output it as sound. Moreover, the audio output unit 1303 may also provide audio output related to a specific function performed by the electronic device 1300 (e.g., a call signal reception sound, a message reception sound, and the like). The audio output unit 1303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1304 is used to receive audio or video signals. The input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042. The graphics processor 13041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1306. The image frames processed by the graphics processor 13041 may be stored in the memory 1309 (or other storage medium) or transmitted via the radio frequency unit 1301 or the network module 1302. The microphone 13042 can receive sounds and process them into audio data; in a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 1301.
The electronic device 1300 also includes at least one sensor 1305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 13061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 13061 and/or the backlight when the electronic device 1300 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 1306 is used to display information input by the user or information provided to the user. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1307 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1307 includes a touch panel 13071 and other input devices 13072. The touch panel 13071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 13071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 13071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1310, and receives and executes commands sent from the processor 1310. In addition, the touch panel 13071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 13071, the user input unit 1307 may also include other input devices 13072. Specifically, the other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 13071 can be overlaid on the display panel 13061, and when the touch panel 13071 detects a touch operation on or near the touch panel, the touch operation can be transmitted to the processor 1310 to determine the type of the touch event, and then the processor 1310 can provide a corresponding visual output on the display panel 13061 according to the type of the touch event. Although the touch panel 13071 and the display panel 13061 are shown in fig. 13 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 13071 and the display panel 13061 may be integrated to implement the input and output functions of the electronic device, and are not limited herein.
The interface unit 1308 is an interface for connecting an external device to the electronic apparatus 1300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1308 may be used to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 1300 or may be used to transmit data between the electronic device 1300 and an external device.
The memory 1309 may be used to store software programs as well as various data. The memory 1309 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile phone, and the like. Further, the memory 1309 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1310 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 1309 and calling data stored in the memory 1309, thereby performing overall monitoring of the electronic device. Processor 1310 may include one or more processing units; preferably, the processor 1310 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1310.
The electronic device 1300 may also include a power supply 1311 (e.g., a battery) for powering the various components. Preferably, the power supply 1311 may be logically coupled to the processor 1310 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the electronic device 1300 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 1310, a memory 1309, and a computer program stored in the memory 1309 and capable of running on the processor 1310, where the computer program, when executed by the processor 1310, implements each process of the foregoing bullet screen information display method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the foregoing bullet screen information identification method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
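As a concrete illustration of the server-side pre-processing described in this disclosure (removing symbol information, obtaining keywords through semantic analysis, matching keywords against the first object information, and storing each bullet screen under the matched third object information or under irrelevant information), the following is a minimal sketch. The whitespace tokenizer and dictionary storage are illustrative assumptions standing in for real semantic analysis and a real data store:

```python
# Hypothetical sketch of the server-side bullet screen indexing described
# in this disclosure; the regex cleanup, whitespace tokenizer, and dict
# storage are illustrative assumptions, not the actual implementation.
import re
from collections import defaultdict

def index_bullet(bullet, first_object_info, store):
    # Remove symbol information before semantic analysis.
    cleaned = re.sub(r"[^\w\s]", "", bullet)
    # Stand-in for semantic analysis: split the text into keywords.
    keywords = cleaned.lower().split()
    # Match each keyword against all first object information.
    matched = [obj for obj in first_object_info if obj in keywords]
    if matched:
        # Store the bullet screen under each matched (third) object info.
        for obj in matched:
            store[obj].append(bullet)
    else:
        # No keyword matched: store under irrelevant information.
        store["irrelevant"].append(bullet)

store = defaultdict(list)
for b in ["Cute cat!!!", "lol", "that sofa tho"]:
    index_bullet(b, ["cat", "sofa"], store)
```

With this layout, identifying target bullet screen information later reduces to looking up the stored entries for each piece of second object information.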

Claims (14)

1. A bullet screen information identification method applied to a server, characterized by comprising:
acquiring an identification result determined after an electronic device identifies a target image area, wherein the target image area is determined by the electronic device according to a first input of a user to a target video, and the target video is a video currently played on the electronic device;
determining second object information from first object information included in the target video according to the identification result;
identifying target bullet screen information from the stored bullet screen information, wherein the target bullet screen information comprises at least one piece of second object information;
and sending the target bullet screen information to the electronic equipment.
2. The method according to claim 1, wherein before the acquiring the identification result determined after the electronic device identifies the target image area, the method further comprises:
acquiring bullet screen information of the target video;
performing semantic analysis on the bullet screen information to obtain at least one keyword of the bullet screen information;
matching each keyword included in the bullet screen information with all the first object information;
and in the case that one keyword in each keyword matches one third object information in all the first object information, storing the third object information and the bullet screen information associated with the third object information.
3. The method according to claim 2, wherein before the performing semantic analysis on the bullet screen information to obtain at least one keyword of the bullet screen information, the method further comprises:
removing the symbol information in the bullet screen information under the condition that the bullet screen information comprises the symbol information;
the semantic analysis is performed on the bullet screen information to obtain at least one keyword of the bullet screen information, and the semantic analysis comprises the following steps:
and performing semantic analysis on the bullet screen information without the symbolic information to obtain at least one keyword of the bullet screen information.
4. The method of claim 3, further comprising:
and under the condition that each keyword is not matched with any third object information, storing irrelevant information and the barrage information related to the irrelevant information.
5. The method of claim 4, wherein identifying target bullet screen information from the stored bullet screen information comprises:
taking bullet screen information including at least one piece of second object information as the target bullet screen information under the condition that bullet screen information including at least one piece of second object information exists in the stored bullet screen information; the stored bullet screen information comprises bullet screen information related to the third object information and/or bullet screen information related to the irrelevant information.
6. The method of claim 1, wherein determining second object information from first object information included in the target video according to the recognition result comprises:
and in the case that the recognition result matches with at least one of all the first object information, taking the first object information matching with the recognition result as the second object information.
7. The method according to claim 1, wherein before the sending the target bullet screen information to the electronic device, the method further comprises:
in the case that there are a plurality of pieces of second object information, classifying the target bullet screen information according to each piece of second object information to obtain target bullet screen information of each category;
the sending the target bullet screen information to the electronic device comprises:
and sending the target bullet screen information of each category to the electronic device.
8. A bullet screen information display method applied to an electronic device, characterized by comprising:
receiving a first input of a user to a target video played on the electronic equipment;
in response to the first input, determining a target frame image of the target video and determining a target image area on the target frame image;
identifying the target image area to obtain an identification result, and sending the identification result to a server, so that the server determines second object information from first object information included in the target video according to the identification result, and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information includes at least one piece of second object information;
receiving the target bullet screen information sent by the server;
and displaying the target bullet screen information.
9. The method of claim 8, wherein determining a target frame image of the target video and determining a target image area on the target frame image in response to the first input comprises:
in response to the first input, determining a scribe trajectory corresponding to the first input;
determining a frame image of the target video corresponding to the end time of the first input, and taking the frame image of the target video corresponding to the end time of the first input as the target frame image;
in the case that the scribing track forms a non-closed area, supplementing the non-closed area into a closed area, and taking the closed area on the target frame image as the target image area;
and in the case that the scribing track forms a closed area, taking the closed area formed by the scribing track as the target image area.
10. The method according to claim 9, wherein the receiving the target bullet screen information sent by the server comprises:
receiving target bullet screen information of each category sent by the server, wherein the target bullet screen information of each category is obtained by the server by classifying the target bullet screen information according to each piece of second object information;
and displaying the target bullet screen information of each category according to each category.
11. A server, comprising:
the first acquisition module is used for acquiring an identification result determined after an electronic device identifies a target image area, wherein the target image area is determined by the electronic device according to a first input of a user to a target video, and the target video is a video currently played on the electronic device;
the determining module is used for determining second object information from first object information included in the target video according to the identification result;
the identification module is used for identifying target bullet screen information from the stored bullet screen information, and the target bullet screen information comprises at least one piece of second object information;
and the sending module is used for sending the target bullet screen information to the electronic equipment.
12. An electronic device, comprising:
the first receiving module is used for receiving first input of a user to a target video played on the electronic equipment;
a determining module, configured to determine a target frame image of the target video in response to the first input, and determine a target image area on the target frame image;
the identification module is used for identifying the target image area, obtaining an identification result and sending the identification result to a server, so that the server determines second object information from first object information included in the target video according to the identification result and identifies target bullet screen information from stored bullet screen information, wherein the target bullet screen information comprises at least one piece of second object information;
the second receiving module is used for receiving the target bullet screen information sent by the server;
and the target bullet screen information display module is used for displaying the target bullet screen information.
13. A server, comprising: memory, processor and computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the bullet screen information identification method according to any one of claims 1 to 7.
14. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the bullet screen information display method according to any one of claims 8 to 10.
CN201910990386.8A 2019-10-17 2019-10-17 Barrage information identification method, barrage information display method, server and electronic equipment Active CN112689201B (en)
