CN113497946B - Video processing method, device, electronic equipment and storage medium - Google Patents

Video processing method, device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113497946B
CN113497946B CN202010203456.3A
Authority
CN
China
Prior art keywords
video
player
highlight
competition
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010203456.3A
Other languages
Chinese (zh)
Other versions
CN113497946A
Inventor
Yu Ziqiang (余自强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010203456.3A priority Critical patent/CN113497946B/en
Publication of CN113497946A publication Critical patent/CN113497946A/en
Application granted granted Critical
Publication of CN113497946B publication Critical patent/CN113497946B/en


Classifications

    • H04N 21/2187 — Live feed (source of audio or video content for content-distribution servers)
    • A63F 13/52 — Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/822 — Strategy games; Role-playing games
    • H04N 21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4325 — Content retrieval operation from a local storage medium, e.g. hard disk, by playing back content from the storage medium
    • H04N 21/44204 — Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H04N 21/47202 — End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/4756 — End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie
    • H04N 21/4781 — Supplemental services: Games
    • H04N 21/8456 — Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a video processing method and apparatus, an electronic device, and a storage medium. The embodiment can acquire a game video and spectator comment information for the game video; perform spectator emotion analysis based on the comment information to obtain a spectator emotion index for the game video; perform game interaction analysis based on the game video to obtain a player interaction index between player characters in the game video; capture highlight clips in the game video according to the spectator emotion index and the player interaction index; and send the highlight clips to the client so that the client plays them. In this embodiment, the highlight clips in the game video can be determined from both the spectators' comment reactions and the picture content of the game video. The invention can accurately and automatically extract highlight clips from a game video and send them to the client for convenient viewing, thereby improving video processing efficiency.

Description

Video processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a video processing method, apparatus, electronic device, and storage medium.
Background
On video platforms, electronic-game videos are numerous, their content is varied, and the user base is huge. In particular, during a game live broadcast, all spectators watch the same live picture at the same time; after a game player performs a highlight-worthy play in the live picture, spectators cannot review the highlight clip by themselves and can only wait for the broadcast director to manually play back a highlight clip.
In recorded game videos, highlight clips are marked by detecting in-game events such as kills and assists, and the user can replay them by dragging the progress bar. However, highlight clips obtained in this way are not accurate enough: only events such as kills and assists can be detected, while other kinds of highlight moments in the game video cannot. Current video processing is therefore inefficient.
Disclosure of Invention
The embodiments of the invention provide a video processing method and apparatus, an electronic device, and a storage medium, which can improve video processing efficiency.
An embodiment of the invention provides a video processing method, applicable to a server, comprising:
obtaining a game video and comment information posted by spectators for the game video, wherein a video picture of the game video includes player characters;
performing spectator emotion analysis based on the comment information to obtain a spectator emotion index for the game video;
performing game interaction analysis based on the game video to obtain a player interaction index between player characters in the game video;
capturing highlight clips in the game video according to the spectator emotion index and the player interaction index;
and sending the highlight clips to a client so that the client plays them.
An embodiment of the invention also provides a video processing method, applicable to a client, comprising:
obtaining a game video;
displaying a video playing page, and playing the game video on the video playing page;
when highlight clip information is received from the server, displaying a playback prompt area in the video playing page, where the playback prompt area may include the highlight clip information;
when a playback operation by the user on the playback prompt area is detected, sending a playback request to the server;
and when the highlight clip sent by the server is received, playing the highlight clip on the video playing page.
In some embodiments, the video playing page includes a play control and a pause control, and displaying the video playing page and playing the game video on it includes:
when the highlight clip sent by the server is received, displaying the play control on the video playing page;
when a play operation by the user on the play control is detected, playing the highlight clip;
when a pause operation by the user on the pause control is detected, pausing playback of the highlight clip.
In some embodiments, the video playing page includes a speed control, and displaying the video playing page and playing the game video on it includes:
when a slow-play operation by the user on the speed control is detected, performing time-stretching on the highlight clip to obtain a slowed-down highlight clip, and playing it on the video playing page;
when a fast-play operation by the user on the speed control is detected, performing time-compression on the highlight clip to obtain a sped-up highlight clip, and playing it on the video playing page.
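The slow-play and fast-play operations above amount to rescaling the clip's timestamps. A minimal sketch, using a hypothetical helper not taken from the patent:

```python
def rescale_timestamps(frame_times, speed):
    """Map source timestamps to playback timestamps: speed < 1 stretches
    the clip (slow motion), speed > 1 compresses it (fast play)."""
    return [t / speed for t in frame_times]

# Half speed doubles each frame's presentation time.
print(rescale_timestamps([0.0, 1.0, 2.0], 0.5))  # [0.0, 2.0, 4.0]
```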
In some embodiments, the video playing page includes a sharing control, and displaying the video playing page and playing the game video on it includes:
when a sharing operation by the user on the sharing control is detected, sharing the highlight clip according to the sharing operation.
An embodiment of the invention also provides a video processing apparatus, applicable to a server, comprising:
an acquisition unit, configured to acquire a game video and comment information posted by spectators for the game video, wherein a video picture of the game video includes player characters;
an emotion unit, configured to perform spectator emotion analysis based on the comment information to obtain a spectator emotion index for the game video;
an interaction unit, configured to perform game interaction analysis based on the game video to obtain a player interaction index between player characters in the game video;
an intercepting unit, configured to capture highlight clips in the game video according to the spectator emotion index and the player interaction index;
and a sending unit, configured to send the highlight clips to the client so that the client plays them.
In some embodiments, the interaction unit includes:
an identifier subunit, configured to perform identifier recognition on the game video, recognize the player identifiers corresponding to the player characters in the game video, and determine the position information corresponding to each player identifier;
a camp subunit, configured to determine the character camp type of each player character according to its player identifier;
a hostile-distance subunit, configured to calculate hostile distances between player characters belonging to different character camp types based on the position information;
and an interaction-index subunit, configured to determine the player interaction index between player characters in the game video according to the hostile distances.
In some embodiments, the identifier subunit is configured to:
determine a region to be compared in a video picture of the game video, the region to be compared being obtained by sliding a preset player identifier image over the video picture;
perform a similarity calculation between the region to be compared and the preset player identifier image to obtain the image similarity between them;
determine a player identifier in the video picture according to the image similarity;
and determine the position of the player identifier in the video picture to obtain the position information corresponding to the player identifier.
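The sliding-window comparison described above is essentially template matching. A minimal pure-Python sketch, using a sum-of-squared-differences similarity as a stand-in for whatever similarity measure an implementation would choose (the patent does not fix one):

```python
def match_template(frame, template):
    """Slide `template` over `frame` (both 2-D grayscale lists) and return
    (row, col, similarity) of the best-matching region to be compared."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best = (0, 0, 0.0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            # sum of squared differences over the candidate region
            ssd = sum(
                (frame[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            similarity = 1.0 / (1.0 + ssd)  # 1.0 means an exact match
            if similarity > best[2]:
                best = (r, c, similarity)
    return best

# Tiny 4x4 frame containing the 2x2 template at position (1, 2).
frame = [
    [0, 0, 0, 0],
    [0, 0, 9, 5],
    [0, 0, 5, 9],
    [0, 0, 0, 0],
]
template = [[9, 5], [5, 9]]
row, col, sim = match_template(frame, template)
print(row, col, sim)  # 1 2 1.0
```

A production implementation would use an optimized routine such as OpenCV's `cv2.matchTemplate` rather than nested Python loops.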
In some embodiments, the camp subunit is configured to:
perform color analysis on the player identifier to determine its color information;
and determine the character camp type of the player character corresponding to the player identifier according to the color information.
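As a rough illustration of the color-analysis step, assuming (purely for this sketch) that one camp's identifiers are predominantly blue and the other's predominantly red:

```python
def camp_from_color(pixels):
    """Classify a player identifier region by dominant color.
    Hypothetical convention: blue-dominant = 'ally', red-dominant = 'enemy'.
    `pixels` is a list of (R, G, B) tuples sampled from the identifier."""
    mean_r = sum(p[0] for p in pixels) / len(pixels)
    mean_b = sum(p[2] for p in pixels) / len(pixels)
    return "ally" if mean_b > mean_r else "enemy"

blue_badge = [(30, 40, 200), (20, 50, 220)]
red_badge = [(210, 40, 30), (230, 50, 20)]
print(camp_from_color(blue_badge))  # ally
print(camp_from_color(red_badge))   # enemy
```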
In some embodiments, the interaction-index subunit is configured to:
when a hostile distance is smaller than a preset hostile-distance threshold, determine the video picture size of the game video;
calculate the hostile player density between player characters in the game video according to the hostile distances and the video picture size;
and determine the player interaction index between player characters in the game video according to the hostile player density.
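The hostile-distance and density computation could be sketched as follows. The distance threshold, the normalization by the frame diagonal, and the pair-count denominator are all illustrative choices, not values specified by the patent:

```python
import math

def player_interaction_index(allies, enemies, frame_w, frame_h,
                             distance_threshold=200.0):
    """Score ally-enemy proximity in one frame: pairs closer than the
    threshold contribute more the closer they are, normalized by the
    frame diagonal and the total number of cross-camp pairs."""
    diagonal = math.hypot(frame_w, frame_h)
    close = [
        math.dist(a, e)
        for a in allies for e in enemies
        if math.dist(a, e) < distance_threshold
    ]
    if not close:
        return 0.0
    return sum(1.0 - d / diagonal for d in close) / (len(allies) * len(enemies))

allies = [(100, 100), (900, 800)]
enemies = [(150, 120), (120, 140)]
idx = player_interaction_index(allies, enemies, 1920, 1080)
print(round(idx, 3))  # ≈ 0.489: one ally is in a close fight with both enemies
```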
In some embodiments, the interaction unit includes:
a player recognition subunit, configured to perform player character recognition on the game video using an image recognition model, to obtain the position information of the player characters in the game video and their character camp types;
a hostile-distance subunit, configured to calculate hostile distances between player characters belonging to different character camp types based on the position information;
and an interaction-index subunit, configured to determine the player interaction index between player characters in the game video according to the hostile distances.
In some embodiments, the player recognition subunit is configured to:
acquire a training image, the training image including player characters annotated with position information and character camp types;
perform model training on a preset model using the training image until the preset model converges, to obtain the image recognition model;
extract image features of the game video using the image recognition model, and determine the pixel types of image pixels in the game video according to the image features;
and determine the player characters in the game video according to the pixel types, along with the position information and character camp type of each player character.
In some embodiments, the comment information includes a plurality of comment sentences, and the emotion unit is configured to:
perform word segmentation on the comment sentences to obtain segmented words;
determine the emotion position of each segmented word in a preset dictionary;
determine the emotion state of each segmented word according to its emotion position;
aggregate the emotion states of the segmented words in each comment sentence to obtain the emotion state of the comment sentence;
and determine the spectator emotion index for the game video according to the emotion states of the comment sentences.
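The dictionary-based emotion scoring above can be sketched as follows. The word lists, the whitespace "segmentation", and the positive-fraction score are invented for illustration; a real implementation would use a proper Chinese word segmenter and emotion dictionary:

```python
# Hypothetical emotion dictionary (stand-in for a preset dictionary).
POSITIVE = {"amazing", "great", "wow", "nice"}
NEGATIVE = {"boring", "bad", "slow"}

def audience_emotion_index(comments):
    """Return the fraction of emotion-bearing words that are positive,
    in [0, 1]; 0.5 (neutral) when no emotion words are found."""
    pos = neg = 0
    for sentence in comments:
        for word in sentence.lower().split():  # crude word segmentation
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.5 if total == 0 else pos / total

# Two positive words ("wow", "amazing") vs one negative ("boring").
print(audience_emotion_index(["wow amazing play", "so boring"]))
```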
In some embodiments, the intercepting unit comprises:
a highlight-degree subunit, configured to perform a weighted summation of the spectator emotion index and the player interaction index to calculate a highlight degree;
and an intercepting subunit, configured to capture the highlight clips in the game video according to the highlight degree.
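The weighted summation reduces to a single line; the weights here are illustrative, as the patent does not specify them:

```python
def highlight_degree(emotion_index, interaction_index,
                     w_emotion=0.4, w_interaction=0.6):
    """Weighted sum of the spectator emotion index and the player
    interaction index. Weights are illustrative, not from the patent."""
    return w_emotion * emotion_index + w_interaction * interaction_index

print(highlight_degree(1.0, 1.0))  # 1.0 when both indexes peak
```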
In some embodiments, the game video includes a plurality of video clips, and the intercepting subunit is configured to:
determine a video clip in the game video whose highlight degree is higher than a preset highlight-degree threshold as a clip to be intercepted;
analyze the change trend of the player interaction index corresponding to the clip to be intercepted, to obtain the player-interaction-index change trend of that clip;
and determine the highlight clip within the clip to be intercepted according to the player-interaction-index change trend.
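One simple way to realize the trend analysis: within a candidate clip, keep the span where the interaction index rises above a threshold and ends when it falls back, so the captured highlight covers the whole exchange. The threshold and trimming rule are hypothetical:

```python
def trim_by_trend(interaction_indexes, threshold=0.5):
    """Return (start, end) frame indexes of the highlight inside a
    candidate clip: from the first value at or above the threshold
    through the last value still at or above it; None if no frame
    qualifies."""
    above = [i for i, v in enumerate(interaction_indexes) if v >= threshold]
    if not above:
        return None
    return above[0], above[-1]

# Interaction index rises at frame 2, peaks at 3, falls back after 4.
scores = [0.1, 0.2, 0.6, 0.9, 0.8, 0.4, 0.1]
print(trim_by_trend(scores))  # (2, 4)
```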
In some embodiments, the sending unit is configured to:
generate highlight clip information according to the highlight clip;
send the highlight clip information to the client so that, when playing the game video on the video playing page, the client displays a playback prompt area including the highlight clip information;
and when a playback request sent by the client is received, send the highlight clip corresponding to the highlight clip information to the client so that the client plays it on the video playing page.
An embodiment of the invention also provides a video processing apparatus, applicable to a client, comprising:
a video acquisition unit, operable to acquire a game video;
a display unit, operable to display a video playing page and play the game video on it;
a playback unit, operable to display a playback prompt area in the video playing page when highlight clip information is received from the server, where the playback prompt area may include the highlight clip information;
a detection unit, operable to send a playback request to the server when a playback operation by the user on the playback prompt area is detected;
and a playing unit, operable to play the highlight clip on the video playing page when the highlight clip sent by the server is received.
In some embodiments, the playback unit may be configured to:
when the highlight clip sent by the server is received, display a play control on the video playing page;
when a play operation by the user on the play control is detected, play the highlight clip;
when a pause operation by the user on the pause control is detected, pause playback of the highlight clip.
In some embodiments, the playback unit may be configured to:
when a slow-play operation by the user on the speed control is detected, perform time-stretching on the highlight clip to obtain a slowed-down highlight clip and play it on the video playing page;
when a fast-play operation by the user on the speed control is detected, perform time-compression on the highlight clip to obtain a sped-up highlight clip and play it on the video playing page.
In some embodiments, the playback unit may be configured to:
when a sharing operation by the user on the sharing control is detected, share the highlight clip according to the sharing operation.
An embodiment of the invention also provides an electronic device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads the instructions from the memory to perform the steps of any of the video processing methods provided by the embodiments of the invention.
An embodiment of the invention also provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of any of the video processing methods provided by the embodiments of the invention.
An embodiment of the invention also provides a video processing system comprising a server and a client, wherein:
the server can acquire a game video and comment information posted by spectators for the game video, a video picture of the game video possibly including player characters; perform spectator emotion analysis based on the comment information to obtain a spectator emotion index for the game video; perform game interaction analysis based on the game video to obtain a player interaction index between player characters in the game video; capture highlight clips in the game video according to the spectator emotion index and the player interaction index; generate highlight clip information according to the highlight clips; send the highlight clip information to the client so that, when playing the game video on the video playing page, the client displays a playback prompt area which may include the highlight clip information; and, when a playback request sent by the client is received, send the highlight clip corresponding to the highlight clip information to the client so that the client plays it on the video playing page.
The client can acquire the game video; display a video playing page and play the game video on it; when the highlight clip information is received from the server, display a playback prompt area in the video playing page, the playback prompt area possibly including the highlight clip information; when a playback operation by the user on the playback prompt area is detected, send a playback request to the server; and, when the highlight clip sent by the server is received, play the highlight clip on the video playing page.
According to the embodiments of the invention, a game video and spectator comment information for the game video can be obtained, the video pictures of the game video possibly including player characters; spectator emotion analysis is performed based on the comment information to obtain a spectator emotion index for the game video; game interaction analysis is performed based on the game video to obtain a player interaction index between player characters in the game video; highlight clips are captured in the game video according to the spectator emotion index and the player interaction index; and the highlight clips are sent to the client for playback.
In the present invention, highlight clips can be detected automatically in a game video by analyzing the spectators' comment information for the game video together with the player characters in it. In the field of game live broadcasting in particular, by calculating the player interaction index between player characters in the game picture during the live broadcast, whether the live picture is exciting can be judged from both the player interaction index and the spectator emotion index derived from the live bullet-screen comments, and the highlight clips in the game video can be sent to the client for spectators to watch, thereby improving video processing efficiency.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1a is a schematic view of a video processing method according to an embodiment of the present invention;
fig. 1b is a schematic flow chart of a video processing method according to an embodiment of the present invention;
FIG. 1c is a schematic diagram of a viewer emotion index over time for a video processing method according to an embodiment of the present invention;
FIG. 1d is a schematic view of a player identifier of a video processing method according to an embodiment of the present invention;
FIG. 1e is a schematic diagram showing a player interaction index of a video processing method according to an embodiment of the present invention;
FIG. 1f is a schematic diagram showing an index change trend in the video processing method according to an embodiment of the present invention;
fig. 2a is a schematic diagram of a video processing method applied to a video playing page according to an embodiment of the present invention;
fig. 2b is a schematic diagram of a second flow of a video processing method according to an embodiment of the present invention;
FIG. 3a is a schematic flow chart of a video processing system according to an embodiment of the present invention;
Fig. 3b is a schematic diagram of video frame gray scale of a game video of the video processing method according to the embodiment of the present invention;
fig. 3c is a schematic diagram of matching degree of a video processing method according to an embodiment of the present invention;
fig. 3d is a schematic diagram of a matching degree after thresholding in the video processing method according to the embodiment of the present invention;
fig. 4 is a schematic diagram of a first configuration of a video processing apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a second structure of a video processing apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
The embodiment of the invention provides a video processing method, a video processing device, electronic equipment and a storage medium.
The video processing device may be integrated in an electronic device, which may be a terminal or a server. The terminal can be a smart phone, a Bluetooth headset, a tablet personal computer and the like. The server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the video processing apparatus may also be integrated in a plurality of electronic devices, for example, the video processing apparatus may be integrated in a plurality of servers, and the video processing method of the present invention is implemented by the plurality of servers.
In some embodiments, the server may also be implemented in the form of a terminal.
For example, referring to FIG. 1a, the electronic device may be a server. The server may obtain, from a live broadcast server, a game video and spectator comment information for the game video, a video frame of the game video possibly including player characters; perform spectator emotion analysis based on the comment information to obtain a spectator emotion index for the game video; perform game interaction analysis based on the game video to obtain a player interaction index between player characters in the game video; capture highlight clips in the game video according to the spectator emotion index and the player interaction index; and send the highlight clips to the client so that the client plays them.
The following describes these in detail. The numbering of the following examples does not limit their preferred order.
Artificial Intelligence (AI) is a technology that uses a digital computer to simulate how humans perceive the environment, acquire knowledge, and use knowledge, enabling machines to perform functions similar to human perception, reasoning, and decision-making. Artificial intelligence technology mainly comprises computer vision, speech processing, natural language processing, machine learning, deep learning, and other directions.
Among them, Computer Vision (CV) is a technology in which a computer, instead of human eyes, performs operations such as recognition and measurement on a target image and carries out further processing. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, virtual reality, augmented reality, and simultaneous localization and mapping, for example image processing techniques for image rendering and image outline extraction.
Natural Language Processing (NLP) is a technology in which a computer, instead of the human brain, performs operations such as understanding, replying, and writing on target text and carries out further processing. Natural language processing techniques typically include text classification, text generation, information retrieval, and question-answering systems, such as speech recognition, AI question answering, poetry generation, and machine translation.
In this embodiment, a video processing method integrating computer vision and natural language processing technologies is provided, where the video processing method is applicable to a server, as shown in fig. 1b, a specific flow of the video processing method may be as follows:
101. Acquire the competition video and spectators' comment information on the competition video, where video frames of the competition video may include player characters.
In this embodiment, the competition video may refer to a match video in the field of competitive sports, such as an electronic game competition video, a basketball game video, or a robot football match video. The competition video may take various forms; for example, it may be a recorded video, a live video, an animated picture, and so on.
In competitions in the adversarial sports field, players compete on the same stage on behalf of the team or faction to which they belong. For example, in a basketball game, two teams compete against each other, each fielding five players; in a battle-royale electronic game competition, 20 teams may compete in the same match with 4 players per team on the field; in a real-time strategy game, two players may face off one-on-one; and so on.
A player character may refer to a virtual player character or to a real player. For example, in a real basketball game, player characters may include a forward, a center, a guard, and so on; in a virtual electronic game competition, player characters may include warriors, beasts, shooters, and the like.
When spectators watch the competition video, they can publish comments on the video in various ways, for example through a barrage (danmaku, comment subtitles that pop up over a network video as it plays), comment-area comments, likes, favorites, forwarding, and so on. Comment information of spectators on the competition video may therefore include likes, favorites, and forwards, as well as information expressed in text, emoticons, and symbols, such as barrage messages and comment-area comments.
There are various ways to acquire the competition video and the spectators' comment information on it. For example, the competition video may be acquired from a video database through a network and the comment information from a user database; alternatively, the competition video may be read from local memory and the comment information obtained from the client through the network, and so on.
102. Perform audience emotion analysis based on the comment information to obtain an audience emotion index of the spectators for the competition video.
Compared with the video frames of the game video, spectators' comment information can sometimes reflect hard-to-describe highlights more precisely. For example, in the video of an electronic game, when a large number of network phrases such as "66666" appear in the spectator barrage, it may be because a highlight play by a player appears in the game video; when a large number of phrases such as "233" appear, it may be because a comedic moment appears in the game video; and so on.
Because of these characteristics, comment information can be used to capture the emotion subjectively generated by spectators. Taking comment information as spectator feedback, the audience emotion index for the game video at that moment can be calculated and used as a factor for measuring whether a highlight appears in the game video.
In step 102, natural language processing techniques may be used to perform audience emotion analysis based on the comment information to obtain an audience emotion index for the game video.
For example, natural language processing techniques may be used to perform viewer emotion analysis based on emotion dictionary methods, deep learning based methods may be used to perform viewer emotion analysis, and so on.
For example, in some embodiments, the steps of performing audience emotion analysis based on comment information with an emotion-dictionary-based method, to obtain the audience emotion index for the game video, are as follows:
a. Perform word segmentation on a comment sentence to obtain segmented comment words;
b. Determine the emotion positions of the segmented comment words in a preset dictionary;
c. Determine the emotion states of the segmented comment words according to the emotion positions;
d. Aggregate the emotion states of the segmented comment words in the comment sentence to obtain the emotion state of the comment sentence;
e. Determine the audience emotion index of the spectators for the game video according to the emotion states of the comment sentences.
Word segmentation is the process of splitting a continuous character sequence, namely the comment sentence, into a word sequence and recombining it according to a certain specification.
The preset dictionary is a dictionary of audience comments on game videos in a specific field. For example, in the field of live game streaming, a live-game dictionary can be used as the preset dictionary; it can include a large number of network phrases, words composed of repeated characters, character-based emoticons, and so on, such as 66666, 23333, 77777, nb, etc.
Each word in the preset dictionary is mapped to its emotion position. For example, in a live-game dictionary, the emotion position of the word "66666" may be [10, 22], indicating that the word falls within the emotion state range of "impressive operation"; the emotion position of the word "7777" may be [-2, 11], indicating that the word falls within the emotion state range of "fun"; and so on.
The pre-set dictionary may be imported by the technician, may be obtained from a dictionary database, etc., and the source thereof is not limited.
Emotion states may include a variety of states, such as a surprise state, a fun state, an anger state, an admiration state, and so forth.
In some embodiments, after the step of determining the emotion states of the segmented comment words according to the emotion positions, degree adverbs, negation words, and the like among the segmented words may also be processed.
A degree adverb changes the degree attribute of the word it modifies, while a negation word can greatly change the emotional tendency of the whole sentence.
Therefore, a detection window can be set for degree adverbs, negation words, and the like. If such words are found in the detection window, their influence coefficients on the emotional tendency can be determined, and the emotion states of the segmented words in the comment sentence are then aggregated with these influence coefficients to obtain the emotion state of the comment sentence.
In some embodiments, certain specific words have a larger influence on the emotion state. Therefore, when aggregating the emotion states of the segmented words to obtain the emotion state of the comment sentence, the emotion states of certain specific segmented words can be weighted, and the emotion states of all segmented words in the sentence are finally summed to obtain the emotion state of the comment sentence.
Finally, the emotion states of the individual comment sentences are summed to calculate the audience emotion index of the spectators for the game video. The audience emotion index is a numerical value expressing the spectators' emotion toward the video frame at a certain moment in the game video.
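The dictionary lookup, detection-window handling, and summation described above can be sketched as follows. This is a minimal illustration: the dictionary entries, emotion-position ranges, influence coefficients, and window size are hypothetical examples, not the actual data of this embodiment.

```python
# Hypothetical emotion dictionary: word -> (low, high) emotion position.
EMOTION_DICT = {
    "66666": (10, 22),   # e.g. the "impressive operation" range
    "77777": (-2, 11),   # e.g. the "fun" range
}
DEGREE_ADVERBS = {"very": 1.5, "slightly": 0.7}   # influence coefficients
NEGATION_WORDS = {"not", "no"}
WINDOW = 2               # detection window before each emotion word

def sentence_emotion(tokens):
    """Sum emotion scores of the segmented words of one comment sentence,
    adjusting each score for degree adverbs and negation words found in
    the detection window immediately before the emotion word."""
    total = 0.0
    for i, tok in enumerate(tokens):
        if tok not in EMOTION_DICT:
            continue
        low, high = EMOTION_DICT[tok]
        score = (low + high) / 2.0            # midpoint of the range
        for prev in tokens[max(0, i - WINDOW):i]:
            if prev in DEGREE_ADVERBS:
                score *= DEGREE_ADVERBS[prev]  # scale the degree
            elif prev in NEGATION_WORDS:
                score = -score                 # flip the tendency
        total += score
    return total

def audience_emotion_index(comment_sentences):
    """Sum the emotion states of all comment sentences into one index."""
    return sum(sentence_emotion(s) for s in comment_sentences)
```

For instance, `sentence_emotion(["not", "66666"])` flips the sign of the "66666" score because the negation word falls inside the detection window.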
For example, referring to fig. 1c, fig. 1c is a graph of the change in the audience emotion index over time as spectators watch a game video. The game video starts playing at 13:40, and 23:20 is the latest play time; according to step 102, the audience emotion index over the 13:40-23:20 viewing period can be obtained.
103. Perform game interaction analysis based on the competition video to obtain player interaction indexes among player characters in the competition video.
In a game video with adversarial content, player characters can engage in adversarial player interactions. For example, in a real basketball game video, player characters can interact by passing, intercepting, and so on; in a virtual electronic game video, player characters can interact by killing, ambushing, and so on.
Because player interaction is more likely the closer the player characters are to each other, game interaction analysis can be performed by comparing video frames of the competition video: information such as the positions and camps of the player characters is determined in the video frames to calculate a player interaction index between player characters, which can serve as a factor for measuring whether a highlight appears in the competition video.
In step 103, the game interaction analysis may be performed on the competition video using computer vision technology to obtain the player interaction indexes among player characters, or using a common image processing technique.
For example, in some embodiments, using a common image processing technique, the steps of performing game interaction analysis based on the game video to obtain the player interaction index between player characters are as follows:
(1) Perform identifier recognition on the game video, identify the player identifiers corresponding to the player characters, and determine the position information corresponding to each player identifier;
(2) Determine the character camp type of each player character according to its player identifier;
(3) Based on the position information, calculate the hostile distances between player characters belonging to different camp types;
(4) Determine the player interaction index between player characters in the game video according to the hostile distances.
A player identifier refers to an identifier specific to a player character in the game video, such as a player's basketball uniform in a basketball game video, the blood bar above a player character in an electronic game video, or the character outline of a player character in a shooting game video.
In particular, in the field of electronic games, visual elements that direct the attention of the player are often constructed in the game frame, and these visual elements are often of a fixed shape, color, and style.
For example, referring to fig. 1d, fig. 1d is a schematic diagram of player identifiers in an electronic game competition video, where the blood bars above player-controlled characters serve as player identifiers. In the game, a blood bar of fixed shape is always present above each player-controlled character, and the outlines of the blood bars of all player characters in the game are identical; the blood bar can therefore be used as a player identifier and matched in the image to determine the positions of the player characters.
It should be noted that in some embodiments, the blood bar of a player character may contain various game content information, such as blood volume, color, cooldown time, and gain status, which changes continuously. Therefore, only the image of the front part of the blood bar, as shown in fig. 1d, is taken as the preset player identifier, ensuring that only the features of the front part of the blood bar are detected in the game frame while interference from other content information in the blood bar is ignored.
The character camp type of a player character can be determined from its player identifier. For example, in a basketball game, which team a player belongs to can be judged from the color of the player's uniform; in an electronic game competition, whether a player character belongs to the friendly or the enemy camp can be determined from the color of its blood bar; and so forth.
In some embodiments, the step of identifying the player identifiers corresponding to player characters in the game video and determining their position information may be performed using a template matching function in a cross-platform computer vision library, for example comprising the following steps:
a. Determine a region to be compared in a video frame of the game video, where the region to be compared is obtained by sliding a preset player identifier image over the video frame;
b. Perform similarity calculation between the region to be compared and the preset player identifier image to obtain the image similarity between them;
c. Determine the player identifiers in the video frame of the game video according to the image similarity;
d. Determine the positions of the player identifiers in the video frame to obtain the position information corresponding to the player identifiers.
For example, the step of "performing identifier recognition on the game video, identifying the player identifiers corresponding to player characters, and determining the position information corresponding to the player identifiers" may be performed using the normalized correlation coefficient matching mode (TM_CCOEFF_NORMED) of the OpenCV template matching function matchTemplate.
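As a sketch of how this matching mode works, the following pure-NumPy implementation computes the same kind of response map as TM_CCOEFF_NORMED (mean-subtracted normalized correlation) and thresholds it to locate identifier candidates. The threshold value and array shapes are illustrative assumptions.

```python
import numpy as np

def match_template_ccoeff_normed(image, templ):
    """Slide `templ` over `image` (2-D grayscale arrays) and return a
    response map: at each offset, the correlation of the mean-subtracted
    patch with the mean-subtracted template, normalized to [-1, 1]."""
    ih, iw = image.shape
    th, tw = templ.shape
    t = templ - templ.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p * p).sum())
            out[y, x] = (p * t).sum() / denom if denom > 1e-12 else 0.0
    return out

def find_player_identifiers(image, templ, threshold=0.9):
    """Steps c-d above: keep positions whose similarity exceeds the
    threshold; each hit is the (x, y) top-left corner of an identifier."""
    res = match_template_ccoeff_normed(image, templ)
    ys, xs = np.where(res >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))
```

With OpenCV available, the response map is obtained directly by `cv2.matchTemplate(image, templ, cv2.TM_CCOEFF_NORMED)`.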
In some embodiments, the step of "determining the character camp type of the player character from the player identifier" may include the following steps:
a. Perform color analysis on the player identifier to determine its color information;
b. Determine the character camp type of the player character corresponding to the player identifier according to the color information.
Wherein the color information includes color average value information, color feature values, color shading information, color distribution information, and the like.
For example, in an electronic game competition video, the blood bars of friendly player characters are blue and those of enemy player characters are red. The colors of a player identifier can therefore be averaged to obtain a color mean, and the character camp type of the corresponding player character is determined from that color mean.
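A minimal sketch of steps a-b under the rule just stated (blue blood bar = friendly camp, red = enemy); the sampled pixel values and the channel-comparison rule are illustrative assumptions.

```python
def camp_from_identifier(pixels):
    """`pixels` is a list of (r, g, b) tuples sampled from a player
    identifier (e.g. its blood bar). Average each channel and decide
    the camp by comparing the red and blue means."""
    n = len(pixels)
    r_mean = sum(p[0] for p in pixels) / n
    b_mean = sum(p[2] for p in pixels) / n
    # Red-dominant identifier -> enemy camp; blue-dominant -> friendly.
    return "enemy" if r_mean > b_mean else "friendly"
```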
In some embodiments, since the denser the player positions in a frame, the more likely the frame is a highlight frame, the player interaction index between player characters may be determined according to the density of player characters in the game video. The step of determining the player interaction index from the hostile distances may include the following steps:
a. When a hostile distance is smaller than a preset hostile distance threshold, determine the video frame size of the game video;
b. Calculate the hostile player density between player characters in the game video according to the hostile distances and the video frame size;
c. Determine the player interaction index between player characters in the game video according to the hostile player density.
For example, taking a game with only two camps, red and blue, let L_red denote all blood-bar positions of the red camp and L_blue denote all blood-bar positions of the blue camp. When the distance between blood-bar positions of different camps is smaller than a preset hostile distance threshold T, the hostile player density D between player characters can be calculated from the hostile distances, so as to determine the player interaction index I between player characters in the game video, where w and h denote the width and height of the video frame of the game video and d denotes the Euclidean distance, i.e., the hostile distance, between positions in L_red and L_blue.
Referring to fig. 1e, fig. 1e is a graph of the player interaction index over time in the game video frames. The game video starts playing at 13:40, and 23:20 is the latest play time; according to step 103, the player interaction index of the game video frames over the 13:40-23:20 period can be obtained.
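The quantities above can be combined into a density-based index as in the following sketch, which assumes that each hostile (red, blue) pair closer than T contributes a closeness weight (T - d)/T and that the sum is normalized by the frame diagonal computed from w and h. Both choices are illustrative assumptions, not necessarily the embodiment's exact formula.

```python
import math

def player_interaction_index(red_positions, blue_positions, w, h, T):
    """Hostile player density from blood-bar positions: sum a closeness
    weight over all (red, blue) pairs whose Euclidean distance d is
    below the hostile distance threshold T, then normalize by the
    video frame diagonal so frame size is accounted for."""
    density = 0.0
    for rx, ry in red_positions:
        for bx, by in blue_positions:
            d = math.hypot(rx - bx, ry - by)    # hostile distance
            if d < T:
                density += (T - d) / T          # closer pairs weigh more
    return 1000.0 * density / math.hypot(w, h)  # scale for readability
```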
In some embodiments, using computer vision techniques, the steps of performing game interaction analysis based on the game video to obtain the player interaction index between player characters are as follows:
(1) Perform player character recognition on the game video using an image recognition model to obtain the position information of the player characters and the character camp types to which they belong;
(2) Based on the position information, calculate the hostile distances between player characters belonging to different camp types;
(3) Determine the player interaction index between player characters in the game video according to the hostile distances.
The image recognition model may be any image recognition model based on an artificial neural network, for example AlexNet, VGG, ResNet, Inception, or DenseNet.
In some embodiments, the step of performing player character recognition on the game video with the image recognition model, to obtain the position information of the player characters and their character camp types, may include the following steps:
a. Acquire training images, where the training images may include player characters annotated with position information and character camp types;
b. Train a preset model with the training images until it converges, obtaining the image recognition model;
c. Extract image features from the game video with the image recognition model, and determine the pixel types of image pixels in the game video according to the image features;
d. Determine the player characters in the game video according to the pixel types of the image pixels, and determine the position information of the player characters and the character camp types to which they belong.
There are various ways to acquire training images; for example, a technician can capture training images from previous game videos and annotate the positions of the player characters and their character camp types.
104. Capture highlight clips in the competition video according to the audience emotion index and the player interaction index.
When a highlight frame appears in the competition video, the mutually hostile player characters in the frame are often close to each other, and spectators' comment information often shows positive emotions such as surprise, amusement, and admiration.
The player interaction index and the audience emotion index can therefore be used to evaluate whether the frames appearing in the game video are highlights.
In some embodiments, considering that the hostile-distance criterion and the spectator-comment criterion carry different weights as judgment criteria, step 104 may include the following steps:
(1) Weight and sum the audience emotion index and the player interaction index to calculate a highlight score;
(2) Capture the highlight clips in the game video according to the highlight score.
For example, the highlight score S is calculated as follows:
S = I + W·E
where S denotes the highlight score, I the player interaction index, E the audience emotion index, and W the emotion score weight, which can be set by a technician.
Referring to fig. 1c and 1e, fig. 1c shows the change of the audience emotion index over time and fig. 1e the change of the player interaction index over time, so the change of the highlight score over time can be calculated from fig. 1c, fig. 1e, and the formula for S.
Referring to fig. 1f, fig. 1f is a graph of the highlight score over time in the game video frames. The game video starts playing at 13:40, and 23:20 is the latest play time; according to step 104, the highlight score of the game video frames over the 13:40-23:20 period can be obtained.
In some embodiments, the step of "capturing highlight clips in the game video according to the highlight score" may comprise the following steps:
a. Determine the video segments of the game video whose highlight score is higher than a preset highlight threshold as segments to be captured;
b. Analyze the trend of the player interaction index corresponding to each segment to be captured, obtaining its player interaction index trend;
c. Determine the highlight clip within each segment to be captured according to its player interaction index trend.
For example, referring to fig. 1f, taking the segments of fig. 1f whose highlight score is greater than a threshold of 500, the region around 21:20 can be determined as a segment to be captured, and the highlight clip is determined within it according to the trend of the player interaction index (for example, a sudden rise of the player interaction index indicates the beginning of a highlight, and a sudden fall indicates its end).
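The segment-capture steps above can be sketched as follows. The score series, the threshold, and the "sharp change" jump size used to detect the rise and fall of the interaction index are illustrative assumptions.

```python
def candidate_segments(scores, threshold):
    """Step a: return (start, end) index ranges of consecutive samples
    whose highlight score exceeds the threshold."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s > threshold and start is None:
            start = i
        elif s <= threshold and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(scores) - 1))
    return segments

def refine_by_trend(segment, interaction, jump):
    """Steps b-c: inside a candidate segment, move the start to the
    first sharp rise of the player interaction index and the end to
    just before the last sharp fall, per the trend rule above."""
    lo, hi = segment
    start, end = lo, hi
    for i in range(lo + 1, hi + 1):
        if interaction[i] - interaction[i - 1] >= jump:   # sudden rise
            start = i
            break
    for i in range(hi, lo, -1):
        if interaction[i - 1] - interaction[i] >= jump:   # sudden fall
            end = i - 1
            break
    return (start, end)
```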
105. Send the highlight clip to the client so that the target terminal plays it.
In this scheme, the highlight clip can be sent directly to the client after being captured, so that the target terminal plays it; alternatively, information about the highlight clip can be sent to the client first, so that the client can decide whether to watch it, thereby reducing traffic consumption.
In some embodiments, step 105 may include the steps of:
Generate highlight information according to the highlight clip;
Send the highlight information to the client so that, when playing the game video on the video playing page, the client displays a playback prompt area, which may include the highlight information;
When a playback request sent by the client is received, send the highlight clip corresponding to the highlight information to the client so that the client plays it on the video playing page.
The highlight information may be displayed in the form of text, an image, audio, a short video, and so on; for example, it may be the first frame image of the highlight clip, or audio describing the highlight, and so on.
As can be seen from the above, the embodiment of the present invention can acquire the competition video and spectators' comment information on it, where the video frames of the competition video may include player characters; perform audience emotion analysis based on the comment information to obtain an audience emotion index; perform game interaction analysis based on the competition video to obtain player interaction indexes among player characters; capture highlight clips in the competition video according to the audience emotion index and the player interaction indexes; and send the highlight clips to the client so that the target terminal plays them.
With this scheme, interactions among player characters in the video frames can be analyzed automatically, and the emotion in spectators' comment information can be analyzed, so that whether a highlight appears in the competition video can be judged accurately by combining player interaction with audience emotion, and the highlight clips can be captured and sent to the client for playback. This scheme therefore improves video processing efficiency.
The embodiment of the invention provides a video processing method and device suitable for a client, electronic equipment and a storage medium.
The video processing device may be integrated in an electronic device, which may be a terminal. The terminal can be a smart phone, a tablet personal computer, a smart Bluetooth device, a notebook computer, a personal computer (Personal Computer, PC) or the like.
For example, referring to fig. 2a, the electronic device may be a smart phone, which may acquire the game video; display a video playing page and play the game video on it; when highlight information is received from the server, display a playback prompt area in the video playing page, which may include the highlight information; when a playback operation of the user on the playback prompt area is detected, send a playback request to the server; and when the highlight clip sent by the server is received, play it on the video playing page.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples. As shown in fig. 2b, the specific flow of the video processing method may be as follows:
201. Acquire the competition video.
There are various ways to acquire the game video, for example acquiring it from a video database through a network, or reading it from local memory, and so on.
202. Display a video playing page and play the competition video on the video playing page.
Referring to fig. 2a, the game video may be played in a video playing page, and the video playing page may further include a pause control, a progress bar control, a barrage switch control, and so on, so that a user clicks these controls to implement functions of pause, skip play, barrage display, and so on, on the game video.
203. When highlight information is received from the server, a playback hint area is displayed in the video play page, which may include the highlight information.
Referring to fig. 2a, a display playback hint area may be included in a video play page, in which highlight information may be displayed.
204. When a playback operation of the user on the playback prompt area is detected, send a playback request to the server.
The playback operation on the playback prompt area may include the user clicking, touching, or sliding on the playback prompt area; when such an operation is detected, a playback request can be sent to the server.
When the server receives the playback request, the server may send the highlight to the client.
205. When the highlight clip sent by the server is received, play it on the video playing page.
For example, referring to fig. 2a, when the highlight clip sent by the server is received, the playback prompt area can stop being displayed, the game video can stop playing on the video playing page, and the video playing page can start playing the highlight clip.
In some embodiments, to allow the user to play and pause the highlight clip and thereby improve the user experience, the video playing page may include a play control and a pause control, and step 205 may include the following steps:
When the highlight clip sent by the server is received, display the play control on the video playing page;
When a play operation of the user on the play control is detected, play the highlight clip;
When a pause operation of the user on the pause control is detected, pause playback of the highlight clip.
In some embodiments, to allow the user to play the highlight clip slowly, quickly, and so on, thereby improving the user experience, the video playing page may include a speed control, and step 205 may include the following steps:
When a slow-play operation of the user on the speed control is detected, stretch the highlight clip in time to obtain a slowed-down highlight clip, and play it on the video playing page;
When a fast-play operation of the user on the speed control is detected, compress the highlight clip in time to obtain a sped-up highlight clip, and play it on the video playing page.
In some embodiments, to allow the user to share the highlight clip and so on, thereby improving the user experience, the video playing page may include a sharing control, and step 205 may include the following step:
When a sharing operation of the user on the sharing control is detected, share the highlight clip according to the sharing operation.
From the above, the embodiment of the invention can obtain a game video; display a video playing page, and play the game video on the video playing page; when highlight information is received from the server, display a playback prompt area in the video playing page, wherein the playback prompt area may include the highlight information; when a playback operation of the user on the playback prompt area is detected, send a playback request to the server; and when the highlight sent by the server is received, play the highlight on the video playing page.
In the embodiment of the invention, the highlight of the competition video can be automatically and accurately acquired while the competition video is played. Therefore, the video processing efficiency is improved by the scheme.
The method described in the above embodiments will be described in further detail below.
In this embodiment, a method according to an embodiment of the present invention will be described in detail by taking a game-play live video as an example.
As shown in fig. 3a, a video processing system includes a client and a server, and the processing flow of the video processing system is as follows:
301. the client acquires the competition video, displays a video playing page and plays the competition video on the video playing page.
The client can acquire the live video of the game match from the video database through the network, display a video playing page and play the live video of the game match on the video playing page.
302. The client side also acquires the competition video and comment information of the audience aiming at the competition video, wherein the video picture of the competition video comprises player characters.
The client side can also acquire the live video of the game play and acquire the barrage of the audience aiming at the live video of the game play, and the video picture of the live video of the game play can comprise a player virtual character controlled by a game player.
For example, a barrage is exemplified as follows:
21:21:28,hahahahahahahahahaha
21:21:28,66666
21:21:28,hahaha
21:21:28–21:21:29,(several further comments whose non-ASCII text was lost in transcoding)
303. And the server side performs audience emotion analysis based on the comment information to obtain audience emotion indexes of the audience aiming at the competition video.
The server side can analyze the emotions of the audiences on the barrages to obtain the emotion indexes of the audiences aiming at the match video.
304. The server performs game interaction analysis based on the game video to obtain player interaction indexes among player characters in the game video.
In this embodiment, the normalized correlation coefficient matching (TM_CCOEFF_NORMED) mode of the OpenCV template matching function matchTemplate may be used for health-bar matching, which includes the following steps:
First, the video frame of the game video needs to be converted into a grayscale image. Fig. 3b is a grayscale image of a video frame of the game video, where the part framed by the white box in the figure is the position of the player's virtual character and its health bar in the video frame.
Then, the TM_CCOEFF_NORMED mode of the OpenCV template matching function matchTemplate is used to match the health bars and generate a matching-degree image; for example, referring to fig. 3c, each pixel value in the matching-degree image represents the matching degree between the health-bar template and the original image at that point.
Next, the matching-degree image is thresholded, and only matching points larger than a preset matching threshold are taken as position points of the player character. For example, fig. 3d shows the image processed with 0.5 as the threshold; the small white points in the image are the position points of the player characters, and the white boxes frame the positions of the player virtual characters and their health bars in the original image.
In some embodiments, since there may be multiple position points at the location of each health bar after thresholding, position points that are close to each other also need to be merged, taking one of them as the position point of the player avatar in each region.
Finally, the camp of the player's virtual character is determined according to the color of the health bar.
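The health-bar pipeline above (template matching, thresholding, merging nearby position points) can be sketched as follows. To keep the sketch dependency-free, it re-implements the TM_CCOEFF_NORMED score in NumPy rather than calling cv2.matchTemplate; the synthetic frame and template are illustrative stand-ins for the grayscale frame of fig. 3b:

```python
import numpy as np

def match_template_ccoeff_normed(image, templ):
    """Sliding-window score map equivalent to
    cv2.matchTemplate(image, templ, cv2.TM_CCOEFF_NORMED),
    re-implemented in NumPy so the sketch needs no OpenCV install."""
    h, w = templ.shape
    t = templ - templ.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            p = image[y:y + h, x:x + w]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            out[y, x] = (p * t).sum() / denom if denom > 1e-12 else 0.0
    return out

def locate_health_bars(gray_frame, bar_template, threshold=0.5, radius=3):
    """Threshold the score map, then merge nearby hits so each health bar
    yields a single position point (one player character per region)."""
    scores = match_template_ccoeff_normed(gray_frame, bar_template)
    points = []
    for y, x in zip(*np.where(scores > threshold)):
        if not any(abs(int(y) - py) <= radius and abs(int(x) - px) <= radius
                   for py, px in points):
            points.append((int(y), int(x)))
    return points

# Synthetic 20x20 grayscale frame with one health-bar-like pattern at (5, 7).
bar_template = np.array([[9.0, 9.0, 9.0, 9.0],
                         [1.0, 1.0, 1.0, 1.0]])
frame = np.zeros((20, 20))
frame[5:7, 7:11] = bar_template
```

With the real frames, `frame` would be the grayscale video frame and `bar_template` a cropped health-bar image; OpenCV's built-in `cv2.matchTemplate` would replace the hand-rolled loop for speed.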
305. And the server intercepts the highlight in the game video according to the audience emotion index and the player interaction index, and generates highlight information according to the highlight.
306. The server side sends the highlight information to the client side.
307. When the client receives the highlight information from the server, a playback prompt area is displayed in the video play page, the playback prompt area including the highlight information.
308. And when the client detects the playback operation of the user for the playback prompt area, sending a playback request to the server.
309. And when the server receives the playback request sent by the client, sending the highlight corresponding to the highlight information to the client.
310. And when the client receives the highlight sent by the server, playing the highlight on the video playing page.
As can be seen from the above, in the embodiment of the present invention, the client obtains the game video, displays the video playing page, and plays the game video on the video playing page; the client also obtains comment information of the audience for the game video, wherein a video picture of the game video includes player characters; the server performs audience emotion analysis based on the comment information to obtain an audience emotion index of the audience for the game video; the server performs game interaction analysis based on the game video to obtain player interaction indexes among player characters in the game video; the server intercepts highlight clips in the game video according to the audience emotion index and the player interaction indexes, and generates highlight information according to the highlight clips; the server sends the highlight information to the client; when the client receives the highlight information from the server, it displays a playback prompt area in the video playing page, wherein the playback prompt area comprises the highlight information; when the client detects a playback operation of the user on the playback prompt area, it sends a playback request to the server; when the server receives the playback request sent by the client, it sends the highlight corresponding to the highlight information to the client; and when the client receives the highlight sent by the server, it plays the highlight on the video playing page.
The invention can automatically and accurately determine, for a game-play live broadcast, the positions of the player virtual characters in the live picture and the camps to which they belong, and can thereby accurately judge whether the players in the current live picture are fighting. When the players are fighting and the audience is in a positive emotional state, the current live picture can be determined to be a highlight picture, so that highlight clips in the game live broadcast can be automatically and accurately intercepted while the user watches the live broadcast; the video playing page then prompts the user to play back the highlight, and the user can independently choose whether or not to play it back. Therefore, the embodiment of the invention can improve the efficiency of video processing while ensuring the user experience.
In order to better implement the above method, the embodiment of the present invention further provides a video processing apparatus, which is suitable for a server, and the video processing apparatus may be specifically integrated in an electronic device, and the electronic device may be a server. The server may be a single server or a server cluster composed of a plurality of servers.
For example, in the present embodiment, a method according to an embodiment of the present invention will be described in detail by taking a specific integration of a video processing apparatus in a server as an example.
For example, as shown in fig. 4, the video processing apparatus may include an acquisition unit 401, an emotion unit 402, an interaction unit 403, an interception unit 404, and a transmission unit 405, as follows:
and (one) an acquisition unit 401.
The obtaining unit 401 may be configured to obtain a game video and comment information of a viewer about the game video, where a player character may be included in a video frame of the game video.
(II) emotion unit 402.
And the emotion unit 402 is used for performing emotion analysis of the audience based on the comment information to obtain an emotion index of the audience for the match video.
In some embodiments, the comment information may include a plurality of comment sentences, and the emotion unit 402 may be used to:
performing word segmentation processing on the comment sentences to obtain comment sentence word segmentation;
determining the emotion position of the comment sentence segmentation in a preset dictionary;
Determining the emotion state of comment sentence segmentation according to the emotion position;
counting the emotion states of the comment sentence segmentation in the comment sentences to obtain the emotion states of the comment sentences;
and determining the audience emotion indexes of the audience aiming at the competition video according to the emotion states of the comment sentences.
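The dictionary-based emotion analysis above can be sketched as follows. The tiny sentiment dictionary and the whitespace tokenizer are illustrative assumptions; a real system would use a full emotion lexicon and a proper word segmenter (e.g. jieba for Chinese barrage text):

```python
# Hypothetical miniature sentiment dictionary.
POSITIVE = {"666", "nice", "amazing", "haha"}
NEGATIVE = {"boring", "lag"}

def sentence_state(sentence):
    """Classify one comment sentence by counting dictionary hits."""
    tokens = sentence.lower().split()  # stand-in for real word segmentation
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def audience_emotion_index(comments):
    """Fraction of positive sentences: 0.0 (none positive) .. 1.0 (all positive)."""
    states = [sentence_state(c) for c in comments]
    return states.count("positive") / len(states) if states else 0.0
```

For a barrage window of ["haha amazing", "so boring", "nice play"], the index would be 2/3, since two of the three sentences resolve to a positive state.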
And (III) an interaction unit 403.
The interaction unit 403 may be configured to perform game interaction analysis based on the game video, so as to obtain a player interaction index between player characters in the game video.
In some embodiments, the interaction unit 403 may include:
The identifier subunit can be used for performing identifier recognition on the game video, identifying the player identifier corresponding to the player character in the game video, and determining the position information corresponding to the player identifier;
The camp subunit can be used for determining the character camp type of the player character according to the player identifier;
The hostile distance subunit can be used for calculating hostile distances among player characters belonging to different character camp types based on the position information;
And the interaction index subunit can be used for determining the player interaction index between player characters in the game video according to the hostile distance.
In some embodiments, the identifier subunit may be configured to:
determining a region to be compared in a video picture of the competition video, wherein the region to be compared is obtained by sliding a preset player identifier image on the video picture of the competition video;
Performing similarity calculation according to the region to be compared and the preset player identifier image to obtain image similarity between the preset player identifier image and the region to be compared;
Determining a player identifier in a video picture of the competition video according to the image similarity;
And determining the position of the player identifier in the video picture of the competition video to obtain the position information corresponding to the player identifier.
In some embodiments, the camp subunit may be configured to:
Performing color analysis on the player identifier to determine color information of the player identifier;
and determining the character camp type of the player character corresponding to the player identifier according to the color information.
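The color-analysis step can be sketched as comparing the mean red and blue channels of the identifier's pixels, under the illustrative assumption that the two camps' health bars are drawn in red and blue (as is common in MOBA spectator views):

```python
def camp_from_color(bar_pixels_rgb):
    """Classify an identifier's camp by its dominant color channel.

    bar_pixels_rgb: iterable of (R, G, B) tuples sampled from the
    player identifier (e.g. health-bar) region.
    """
    n = len(bar_pixels_rgb)
    mean_r = sum(p[0] for p in bar_pixels_rgb) / n
    mean_b = sum(p[2] for p in bar_pixels_rgb) / n
    return "red_camp" if mean_r >= mean_b else "blue_camp"
```

The red/blue palette and the two-class output are assumptions; a title with other camp colors would need its own color prototypes.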
In some embodiments, the interaction index subunit may be configured to:
When the hostile distance is smaller than a preset hostile distance threshold, determining the video picture size of the game video;
calculating the density of the hostile players among the player characters in the competition video according to the hostile distance and the video picture size;
and determining the player interaction index between player characters in the competition video according to the density of the hostile player.
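The hostile-distance and density steps above can be sketched as follows; the relative distance threshold and the pairwise-fraction definition of density are assumptions for illustration, with the video picture size entering through the frame diagonal used to scale the threshold:

```python
import math

def player_interaction_index(red_positions, blue_positions, frame_size,
                             rel_threshold=0.1):
    """Fraction of enemy (red, blue) position pairs whose pixel distance
    is below rel_threshold times the frame diagonal.

    A toy reading of 'hostile distance' and 'hostile player density';
    the 0.1 relative threshold is an assumption.
    """
    w, h = frame_size
    hostile_threshold = rel_threshold * math.hypot(w, h)
    total_pairs = len(red_positions) * len(blue_positions)
    if total_pairs == 0:
        return 0.0
    close_pairs = sum(
        1
        for rx, ry in red_positions
        for bx, by in blue_positions
        if math.hypot(rx - bx, ry - by) < hostile_threshold
    )
    return close_pairs / total_pairs
```

For a 1920x1080 frame with red characters at (100, 100) and (1000, 1000) and blue characters at (150, 150) and (1800, 200), exactly one of the four enemy pairs is within the hostile distance, giving an index of 0.25.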
In some embodiments, the interaction unit 403 may include:
The player identification subunit can be used for carrying out player character identification on the game video by adopting the image identification model to obtain the position information of the player characters in the game video and the character camp types of the player characters;
The hostile distance subunit can be used for calculating hostile distances among player characters belonging to different character camp types based on the position information;
And the interaction index subunit can be used for determining the interaction index of the player between the player characters in the competition video according to the hostile distance.
In some embodiments, the player identification subunit may be configured to:
Acquiring a training image, wherein the training image can comprise player characters marked with position information and character camp types;
Training the preset model by adopting a training image until the preset model converges to obtain an image recognition model;
Extracting image features of the competition video by adopting an image recognition model, and determining pixel types of image pixels in the competition video according to the image features;
And determining the player character in the game video according to the pixel type of the image pixels in the game video, and determining the position information of the player character and the character camp type to which the player character belongs.
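The train-then-classify-pixels flow above can be sketched with a deliberately tiny stand-in for the image recognition model: a nearest-centroid classifier whose "training" is just averaging labeled example pixels. In practice the model would be a trained segmentation network; everything below (function names, colors, labels) is an illustrative assumption:

```python
def train_pixel_classifier(pixels, labels):
    """'Train' a nearest-centroid classifier: one mean RGB value per class."""
    sums = {}
    for rgb, label in zip(pixels, labels):
        total, count = sums.get(label, ((0.0, 0.0, 0.0), 0))
        sums[label] = (tuple(a + b for a, b in zip(total, rgb)), count + 1)
    return {label: tuple(v / count for v in total)
            for label, (total, count) in sums.items()}

def classify_pixel(centroids, rgb):
    """Assign a pixel the class of its nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(centroids[label], rgb)))

# Hypothetical labeled training pixels: two camp colors plus background.
train_pixels = [(250, 10, 10), (240, 20, 30), (10, 10, 250),
                (30, 20, 240), (5, 5, 5)]
train_labels = ["red_camp", "red_camp", "blue_camp", "blue_camp", "background"]
model = train_pixel_classifier(train_pixels, train_labels)
```

Classifying every pixel of a frame this way yields per-pixel camp labels, from which character positions can be read off as the centers of same-label regions.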
And (fourth) an interception unit 404.
And the interception unit 404 is configured to intercept a highlight in the game video according to the audience emotion index and the player interaction index.
In some embodiments, the interception unit 404 may include:
The highlight-degree subunit can be used for performing weighted summation of the audience emotion index and the player interaction index to calculate a highlight degree;
And the interception subunit can be used for intercepting the highlight in the game video according to the highlight degree.
In some embodiments, the game video may include a plurality of video clips, and the interception subunit may be configured to:
determining a video segment whose highlight degree is higher than a preset highlight-degree threshold in the game video as a segment to be intercepted;
carrying out index change trend analysis according to player interaction indexes corresponding to the to-be-intercepted fragments to obtain player interaction index change trends of the to-be-intercepted fragments;
and determining the highlight fragment in the fragment to be intercepted according to the change trend of the player interaction index of the fragment to be intercepted.
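The weighted summation and trend-based interception above can be sketched as follows; the equal weights, the 0.6 threshold, and the "keep while the interaction index is still rising" trend rule are illustrative assumptions:

```python
def highlight_degree(emotion_index, interaction_index,
                     w_emotion=0.5, w_interaction=0.5):
    """Weighted sum of the two indices; the 0.5/0.5 weights are assumed."""
    return w_emotion * emotion_index + w_interaction * interaction_index

def pick_highlights(segments, threshold=0.6):
    """segments: list of (emotion_index, interaction_index), one per clip.

    Keep clips whose highlight degree exceeds the threshold, then drop a
    candidate once the interaction index starts falling relative to the
    previous clip — a simple reading of the trend-analysis step.
    """
    candidates = [i for i, (e, a) in enumerate(segments)
                  if highlight_degree(e, a) > threshold]
    kept = []
    for i in candidates:
        prev_interaction = segments[i - 1][1] if i > 0 else 0.0
        if segments[i][1] >= prev_interaction:  # rising (or flat) trend
            kept.append(i)
    return kept
```

For four clips with (emotion, interaction) indices (0.2, 0.1), (0.7, 0.6), (0.9, 0.8), (0.8, 0.5), clips 1 and 2 survive: clip 0 falls below the threshold and clip 3's interaction index is already falling.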
And (fifth) a transmitting unit 405.
The sending unit 405 may be configured to send the highlight to the client so that the target terminal plays the highlight.
In some embodiments, the sending unit 405 may be configured to:
generating highlight information according to the highlight;
Sending the highlight information to the client so that the client displays a playback prompt area when playing the game video on the video playing page, wherein the playback prompt area can comprise the highlight information;
And when receiving a playback request sent by the client, sending the highlight corresponding to the highlight information to the client so that the client plays the highlight on the video playing page.
In the implementation, each unit may be implemented as an independent entity, or may be implemented as the same entity or several entities in any combination, and the implementation of each unit may be referred to the foregoing method embodiment, which is not described herein again.
As can be seen from the above, in the video processing apparatus of this embodiment, the obtaining unit obtains the game video and comment information of the audience for the game video, and the video frame of the game video may include player characters; the emotion unit analyzes the emotion of the audience based on the comment information to obtain an emotion index of the audience aiming at the competition video; the interactive unit performs game interaction analysis based on the game video to obtain player interaction indexes among player characters in the game video; the capturing unit captures highlight clips in the competition video according to the audience emotion indexes and the player interaction indexes; the highlight is transmitted to the client by the transmitting unit so that the target terminal plays the highlight.
Therefore, the embodiment of the invention can improve the efficiency of video processing.
In order to better implement the above method, the embodiment of the present invention further provides a video processing apparatus, which is suitable for a client, and the video processing apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal. The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer, or other such devices.
For example, in this embodiment, a method according to an embodiment of the present invention will be described in detail by taking a case where a video processing apparatus is specifically integrated in a smart phone as an example.
For example, as shown in fig. 5, the video processing apparatus may include a video acquisition unit 501, a display unit 502, a playback unit 503, a detection unit 504, and a playback unit 505, as follows:
and (one) a video acquisition unit 501.
The video acquisition unit 501 may be used to acquire a game video.
And (two) a display unit 502.
The display unit 502 may be used to display a video play page and play a game video on the video play page.
And (iii) a playback unit 503.
The playback unit 503 may be configured to display a playback hint area in the video play page when highlight information is received from the server, and the playback hint area may include the highlight information.
(IV) a detection unit 504.
The detection unit 504 may be configured to send a playback request to the server when detecting a playback operation of the user with respect to the playback prompt area.
And (fifth) a play unit 505.
The playing unit 505 may be configured to play the highlight on the video playing page when the highlight sent by the server is received.
In some embodiments, to enable the user to play and pause the highlight, thereby improving the user experience, the video playing page includes a play control and a pause control, and the playing unit 505 may be configured to:
when the highlight sent by the server is received, display the play control on the video playing page;
when a play operation of the user on the play control is detected, play the highlight;
when a pause operation of the user on the pause control is detected, pause playback of the highlight.
In some embodiments, to enable the user to perform operations such as slow play and fast play on the highlight, thereby improving the user experience, the video playing page may include a speed control, and the playing unit 505 may be configured to:
when a slow-play operation of the user on the speed control is detected, perform time-extension processing on the highlight to obtain a slowed-down highlight, and play the slowed-down highlight on the video playing page;
when a fast-play operation of the user on the speed control is detected, perform time-shortening processing on the highlight to obtain a sped-up highlight, and play the sped-up highlight on the video playing page.
In some embodiments, in order to enable a user to perform operations such as sharing a highlight, thereby improving the user experience, the video playing page may include a sharing control, and the playing unit 505 may be configured to:
And when the sharing operation of the user aiming at the sharing control is detected, sharing the highlight according to the sharing operation.
In the implementation, each unit may be implemented as an independent entity, or may be implemented as the same entity or several entities in any combination, and the implementation of each unit may be referred to the foregoing method embodiment, which is not described herein again.
As can be seen from the above, the video processing apparatus of the present embodiment obtains the game video by the video obtaining unit; displays a video playing page by the display unit, and plays the game video on the video playing page; when highlight information is received from the server, displays a playback prompt area in the video playing page by the playback unit, wherein the playback prompt area can comprise the highlight information; when a playback operation of the user on the playback prompt area is detected, sends a playback request to the server by the detection unit; and when the highlight sent by the server is received, plays the highlight on the video playing page by the playing unit.
Therefore, the embodiment of the invention can improve the efficiency of video processing.
The embodiment of the invention also provides electronic equipment which can be a terminal, a server and other equipment. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and the like; the server may be a single server, a server cluster composed of a plurality of servers, or the like.
In some embodiments, the video processing apparatus may also be integrated in a plurality of electronic devices, for example, the video processing apparatus may be integrated in a plurality of servers, and the video processing method of the present invention is implemented by the plurality of servers.
In this embodiment, a detailed description will be given with reference to fig. 6, which shows a schematic structural diagram of the electronic device according to the embodiment of the present invention. Specifically:
The electronic device may include a processor 601 with one or more processing cores, a memory 602 with one or more computer-readable storage media, a power supply 603, an input module 604, and a communication module 605, among other components. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device, which may include more or fewer components than shown, may combine certain components, or may use a different arrangement of components. Wherein:
The processor 601 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 602, and calling data stored in the memory 602, thereby performing overall monitoring of the electronic device. In some embodiments, processor 601 may include one or more processing cores; in some embodiments, processor 601 may integrate an application processor that primarily handles operating systems, user interfaces, applications, and the like, with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
The memory 602 may be used to store software programs and modules, and the processor 601 may execute various functional applications and data processing by executing the software programs and modules stored in the memory 602. The memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide access to the memory 602 by the processor 601.
The electronic device also includes a power supply 603 that powers the various components, and in some embodiments, the power supply 603 may be logically connected to the processor 601 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system. The power supply 603 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may also include an input module 604, which input module 604 may be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The electronic device may also include a communication module 605, and in some embodiments the communication module 605 may include a wireless module, through which the electronic device may wirelessly transmit over short distances, thereby providing wireless broadband internet access to the user. For example, the communication module 605 may be used to assist a user in e-mail, browsing web pages, accessing streaming media, and the like.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein.
In some embodiments, the processor 601 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 601 executes the application programs stored in the memory 602, so as to implement various functions as follows:
Obtaining a competition video and comment information of spectators aiming at the competition video, wherein a video picture of the competition video can comprise player characters;
Carrying out audience emotion analysis based on comment information to obtain audience emotion indexes of the audience aiming at the competition video;
Performing game interaction analysis based on the game video to obtain player interaction indexes among player characters in the game video;
Capturing highlight clips in the competition video according to the audience emotion indexes and the player interaction indexes;
and sending the highlight to the client so that the target terminal plays the highlight.
In some embodiments, the processor 601 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 601 executes the application programs stored in the memory 602, so as to implement various functions as follows:
Obtaining a game video;
Displaying a video playing page, and playing a competition video on the video playing page;
when the highlight information is received from the server, displaying a playback prompt area in the video play page, wherein the playback prompt area can comprise the highlight information;
when detecting playback operation of a user aiming at a playback prompt area, sending a playback request to a server;
and when the highlight sent by the server is received, playing the highlight on the video playing page.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Therefore, the video processing efficiency is improved by the scheme.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of any one of the video processing methods provided by the embodiments of the present invention.
For example, in some embodiments, the instructions may perform the steps of:
Obtaining a competition video and comment information of spectators aiming at the competition video, wherein a video picture of the competition video can comprise player characters;
Carrying out audience emotion analysis based on comment information to obtain audience emotion indexes of the audience aiming at the competition video;
Performing game interaction analysis based on the game video to obtain player interaction indexes among player characters in the game video;
Capturing highlight clips in the competition video according to the audience emotion indexes and the player interaction indexes;
and sending the highlight to the client so that the target terminal plays the highlight.
For example, in some embodiments, the instructions may perform the steps of:
Obtaining a game video;
Displaying a video playing page, and playing a competition video on the video playing page;
when the highlight information is received from the server, displaying a playback prompt area in the video play page, wherein the playback prompt area can comprise the highlight information;
when detecting playback operation of a user aiming at a playback prompt area, sending a playback request to a server;
and when the highlight sent by the server is received, playing the highlight on the video playing page.
Wherein the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
The instructions stored in the storage medium may perform steps in any video processing method provided by the embodiments of the present invention, so that the beneficial effects that any video processing method provided by the embodiments of the present invention can be achieved, which are detailed in the previous embodiments and are not described herein.
The foregoing has described in detail a video processing method, apparatus, electronic device and computer readable storage medium according to embodiments of the present invention, and specific examples have been applied to illustrate the principles and embodiments of the present invention, where the foregoing examples are only for aiding in the understanding of the method and core concept of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present invention, the present description should not be construed as limiting the present invention.

Claims (14)

1. A video processing method, comprising:
obtaining a competition video and comment information of spectators for the competition video, wherein a video picture of the competition video comprises player characters;
performing audience emotion analysis based on the comment information to obtain an audience emotion index of the spectators for the competition video;
performing competition interaction analysis based on the competition video to obtain a player interaction index between the player characters in the competition video;
capturing highlight clips in the competition video according to the audience emotion index and the player interaction index;
sending the highlight clips to a client so that a target terminal plays the highlight clips;
wherein the performing competition interaction analysis based on the competition video to obtain a player interaction index between the player characters in the competition video comprises:
performing identifier recognition on the competition video, identifying player identifiers corresponding to the player characters in the competition video, and determining position information corresponding to the player identifiers;
determining, according to the player identifiers, character camp types to which the player characters belong;
calculating, based on the position information, hostile distances between player characters belonging to different character camp types;
and determining the player interaction index between the player characters in the competition video according to the hostile distances.
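The patent describes the interaction analysis only in prose. As a rough illustrative sketch (not the patented implementation — the function names, the inverse-distance form of the index, and the `scale` parameter are all assumptions), the hostile-distance steps of claim 1 could be outlined as:

```python
import math

def hostile_distances(positions, camps):
    """Pairwise distances between player characters in different camps.

    positions: {player_id: (x, y)} pixel coordinates of player identifiers
    camps:     {player_id: camp label}, e.g. "blue" / "red"
    """
    dists = []
    ids = list(positions)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if camps[a] != camps[b]:  # only hostile (cross-camp) pairs count
                ax, ay = positions[a]
                bx, by = positions[b]
                dists.append(math.hypot(ax - bx, ay - by))
    return dists

def player_interaction_index(positions, camps, scale=100.0):
    """Closer hostile players -> higher interaction (assumed inverse-distance form)."""
    dists = hostile_distances(positions, camps)
    if not dists:
        return 0.0
    return sum(scale / (d + 1.0) for d in dists) / len(dists)
```

Any monotone-decreasing function of hostile distance would serve the same purpose; the inverse form here is just one plausible choice.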
2. The video processing method according to claim 1, wherein the performing identifier recognition on the competition video, identifying player identifiers corresponding to the player characters in the competition video, and determining position information corresponding to the player identifiers comprises:
determining a region to be compared in the video picture of the competition video, wherein the region to be compared is obtained by sliding a preset player identifier image over the video picture of the competition video;
performing similarity calculation on the region to be compared and the preset player identifier image to obtain an image similarity between the preset player identifier image and the region to be compared;
determining a player identifier in the video picture of the competition video according to the image similarity;
and determining the position of the player identifier in the video picture of the competition video to obtain the position information corresponding to the player identifier.
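The sliding-window comparison of claim 2 is classic template matching. A minimal sketch on 2-D grayscale lists follows; the similarity measure (a normalized inverse absolute difference) is an assumption — production systems typically use normalized cross-correlation, e.g. OpenCV's `matchTemplate`:

```python
def template_match(frame, template):
    """Slide `template` over `frame` (both 2-D lists of 0-255 gray values)
    and return the best similarity and the (x, y) where it occurs."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best = (0.0, (0, 0))
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):          # each (y, x) defines a region to compare
            diff = sum(
                abs(frame[y + i][x + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            sim = 1.0 / (1.0 + diff / (th * tw * 255.0))
            if sim > best[0]:
                best = (sim, (x, y))
    return best  # (similarity, position of the matched player identifier)
```

The returned position is exactly the "position information corresponding to the player identifier" once the similarity clears a chosen threshold.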
3. The video processing method according to claim 1, wherein the determining, according to the player identifiers, character camp types to which the player characters belong comprises:
performing color analysis on a player identifier to determine color information of the player identifier;
and determining, according to the color information, the character camp type to which the player character corresponding to the player identifier belongs.
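One simple way to realize the color analysis of claim 3 is to compare the identifier's mean color against per-camp reference colors. The camp labels and reference RGB values below are purely illustrative assumptions:

```python
def camp_from_identifier(pixels, camp_colors=None):
    """Classify a player identifier's camp by its dominant color.

    pixels: list of (r, g, b) tuples sampled from the identifier region.
    camp_colors: assumed mapping of camp label -> reference RGB color.
    """
    if camp_colors is None:
        camp_colors = {"blue": (0, 0, 255), "red": (255, 0, 0)}  # illustrative
    n = len(pixels)
    mean = tuple(sum(p[c] for p in pixels) / n for c in range(3))
    # nearest reference color by squared RGB distance
    return min(
        camp_colors,
        key=lambda camp: sum((mean[c] - camp_colors[camp][c]) ** 2 for c in range(3)),
    )
```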
4. The video processing method according to claim 1, wherein the determining the player interaction index between the player characters in the competition video according to the hostile distances comprises:
when a hostile distance is smaller than a preset hostile distance threshold, determining a video picture size of the competition video;
calculating a hostile player density between the player characters in the competition video according to the hostile distances and the video picture size;
and determining the player interaction index between the player characters in the competition video according to the hostile player density.
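The density step of claim 4 can be sketched as counting close hostile pairs and normalizing by frame area. The 200-pixel threshold and per-megapixel normalization are illustrative assumptions, not values from the patent:

```python
def hostile_player_density(hostile_dists, frame_w, frame_h, threshold=200.0):
    """Density of close hostile pairs per unit of screen area.

    Only pairs closer than `threshold` (an assumed preset hostile-distance
    threshold, in pixels) contribute; density is normalized by frame area."""
    close = [d for d in hostile_dists if d < threshold]
    area = frame_w * frame_h
    return len(close) / area * 1_000_000  # close hostile pairs per megapixel
```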
5. The method according to claim 1, wherein the performing competition interaction analysis based on the competition video to obtain a player interaction index between the player characters in the competition video comprises:
performing player character recognition on the competition video by using an image recognition model to obtain position information of the player characters in the competition video and character camp types to which the player characters belong;
calculating, based on the position information, hostile distances between player characters belonging to different character camp types;
and determining the player interaction index between the player characters in the competition video according to the hostile distances.
6. The video processing method according to claim 5, wherein the performing player character recognition on the competition video by using an image recognition model to obtain position information of the player characters in the competition video and character camp types to which the player characters belong comprises:
acquiring a training image, wherein the training image comprises player characters annotated with position information and character camp types;
performing model training on a preset model by using the training image until the preset model converges, to obtain the image recognition model;
extracting image features of the competition video by using the image recognition model, and determining pixel types of image pixels in the competition video according to the image features;
and determining a player character in the competition video according to the pixel types of the image pixels in the competition video, and determining the position information of the player character and the character camp type to which the player character belongs.
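The last two steps of claim 6 — pixel-type classification followed by localization — can be illustrated with a crude nearest-centroid stand-in for the trained model (the real claim assumes a converged learned model; the labels, centroid rule, and centroid-of-mass localization here are assumptions):

```python
def classify_pixels(frame, centroids):
    """Assign each pixel a type by nearest feature centroid.

    frame: 2-D list of (r, g, b) tuples.
    centroids: assumed mapping of pixel-type label -> reference RGB center.
    """
    return [
        [
            min(centroids,
                key=lambda t: sum((px[c] - centroids[t][c]) ** 2 for c in range(3)))
            for px in row
        ]
        for row in frame
    ]

def locate_player(labels, player_type):
    """Mean position of pixels classified as the given player type."""
    pts = [(x, y) for y, row in enumerate(labels)
           for x, t in enumerate(row) if t == player_type]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
```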
7. The video processing method according to claim 1, wherein the capturing highlight clips in the competition video according to the audience emotion index and the player interaction index comprises:
performing weighted summation on the audience emotion index and the player interaction index to calculate a highlight degree;
and capturing the highlight clips in the competition video according to the highlight degree.
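The weighted summation of claim 7 is a two-term linear combination; the weights and the 0.7 capture threshold below are illustrative assumptions, not disclosed values:

```python
def highlight_degree(emotion_index, interaction_index,
                     w_emotion=0.4, w_interaction=0.6):
    """Weighted summation of the two indexes; weights are assumed."""
    return w_emotion * emotion_index + w_interaction * interaction_index

def capture_highlights(segments, threshold=0.7):
    """Keep segments whose highlight degree exceeds a preset threshold.

    segments: list of dicts with per-segment "emotion" and "interaction" indexes.
    """
    return [
        seg for seg in segments
        if highlight_degree(seg["emotion"], seg["interaction"]) > threshold
    ]
```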
8. The video processing method according to claim 7, wherein the competition video comprises a plurality of video clips, and the capturing the highlight clips in the competition video according to the highlight degree comprises:
determining, as clips to be captured, video clips in the competition video whose highlight degree is higher than a preset highlight threshold;
performing index change trend analysis according to the player interaction indexes corresponding to the clips to be captured, to obtain a player interaction index change trend of the clips to be captured;
and determining the highlight clips among the clips to be captured according to the player interaction index change trend of the clips to be captured.
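Claim 8 leaves the trend test unspecified. One assumed realization — comparing the mean interaction index of a clip's second half against its first half — would be:

```python
def refine_by_trend(candidates):
    """Among candidate clips (already above the highlight threshold), keep
    those whose player interaction index is rising.

    candidates: list of dicts with an "interaction_series" of per-sample
    interaction indexes (at least two samples each). The half-vs-half mean
    comparison is an illustrative trend rule, not the patented one."""
    highlights = []
    for seg in candidates:
        series = seg["interaction_series"]
        mid = len(series) // 2
        rising = (sum(series[mid:]) / (len(series) - mid)
                  > sum(series[:mid]) / mid)
        if rising:
            highlights.append(seg)
    return highlights
```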
9. The video processing method according to claim 1, wherein the comment information comprises a plurality of comment sentences, and the performing audience emotion analysis based on the comment information to obtain an audience emotion index of the spectators for the competition video comprises:
performing word segmentation processing on the comment sentences to obtain comment sentence words;
determining emotion positions of the comment sentence words in a preset dictionary;
determining emotion states of the comment sentence words according to the emotion positions;
counting the emotion states of the comment sentence words in each comment sentence to obtain an emotion state of the comment sentence;
and determining the audience emotion index of the spectators for the competition video according to the emotion states of the comment sentences.
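A minimal sketch of claim 9's dictionary-based pipeline follows. Whitespace splitting stands in for real word segmentation (Chinese comments would need a segmenter such as jieba), and the tiny lexicon is a placeholder for the preset dictionary:

```python
def audience_emotion_index(comments, lexicon=None):
    """Dictionary-based audience emotion analysis.

    Each comment sentence is split into words; each word's emotion state is
    looked up in a preset dictionary (+1 positive, -1 negative, 0 unknown);
    per-sentence states are then averaged into one index in [-1, 1]."""
    if lexicon is None:
        lexicon = {"amazing": 1, "great": 1, "nice": 1,
                   "boring": -1, "bad": -1}  # illustrative stand-in dictionary
    states = []
    for sentence in comments:
        words = sentence.lower().split()        # word segmentation (stand-in)
        score = sum(lexicon.get(w, 0) for w in words)
        states.append(1 if score > 0 else -1 if score < 0 else 0)
    return sum(states) / len(states) if states else 0.0
```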
10. The video processing method according to claim 1, wherein the sending the highlight clips to a client so that a target terminal plays the highlight clips comprises:
generating highlight clip information according to the highlight clips;
sending the highlight clip information to the client so that the client displays a playback prompt area when playing the competition video in a video playing page, wherein the playback prompt area comprises the highlight clip information;
and when a playback request sent by the client is received, sending a highlight clip corresponding to the highlight clip information to the client so that the client plays the highlight clip on the video playing page.
11. A video processing method, comprising:
obtaining a competition video;
displaying a video playing page, and playing the competition video on the video playing page;
when highlight clip information sent by a server is received, displaying a playback prompt area in the video playing page, wherein the playback prompt area comprises the highlight clip information;
when a playback operation of a user for the playback prompt area is detected, sending a playback request to the server;
and when a highlight clip sent by the server is received, playing the highlight clip on the video playing page;
wherein the highlight clip is obtained by the server based on the following steps:
acquiring the competition video and comment information of spectators for the competition video, wherein a video picture of the competition video comprises player characters;
performing audience emotion analysis based on the comment information to obtain an audience emotion index of the spectators for the competition video;
performing identifier recognition on the competition video, identifying player identifiers corresponding to the player characters in the competition video, and determining position information corresponding to the player identifiers;
determining, according to the player identifiers, character camp types to which the player characters belong;
calculating, based on the position information, hostile distances between player characters belonging to different character camp types;
determining a player interaction index between the player characters in the competition video according to the hostile distances;
capturing highlight clips in the competition video according to the audience emotion index and the player interaction index;
and sending the highlight clips to a client so that a target terminal plays the highlight clips.
12. A video processing apparatus, comprising:
an obtaining unit, configured to obtain a competition video and comment information of spectators for the competition video, wherein a video picture of the competition video comprises player characters;
an emotion unit, configured to perform audience emotion analysis based on the comment information to obtain an audience emotion index of the spectators for the competition video;
an interaction unit, configured to perform competition interaction analysis based on the competition video to obtain a player interaction index between the player characters in the competition video;
a capturing unit, configured to capture highlight clips in the competition video according to the audience emotion index and the player interaction index;
and a sending unit, configured to send the highlight clips to a client so that a target terminal plays the highlight clips;
wherein the interaction unit is configured to:
perform identifier recognition on the competition video, identify player identifiers corresponding to the player characters in the competition video, and determine position information corresponding to the player identifiers;
determine, according to the player identifiers, character camp types to which the player characters belong;
calculate, based on the position information, hostile distances between player characters belonging to different character camp types;
and determine the player interaction index between the player characters in the competition video according to the hostile distances.
13. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions; wherein the processor loads the instructions from the memory to perform the steps in the video processing method according to any one of claims 1 to 11.
14. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the video processing method of any one of claims 1 to 11.
CN202010203456.3A 2020-03-20 2020-03-20 Video processing method, device, electronic equipment and storage medium Active CN113497946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203456.3A CN113497946B (en) 2020-03-20 2020-03-20 Video processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113497946A CN113497946A (en) 2021-10-12
CN113497946B true CN113497946B (en) 2024-05-31

Family

ID=77994275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203456.3A Active CN113497946B (en) 2020-03-20 2020-03-20 Video processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113497946B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339368B (en) * 2021-11-24 2023-04-14 腾讯科技(深圳)有限公司 Display method, device and equipment for live event and storage medium
CN114780180A (en) * 2021-12-21 2022-07-22 北京达佳互联信息技术有限公司 Object data display method and device, electronic equipment and storage medium
CN114339304A (en) * 2021-12-22 2022-04-12 中国电信股份有限公司 Live video processing method and device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8409000B1 (en) * 2012-03-09 2013-04-02 Hulu Llc Configuring advertisements in a video segment based on a game result
CN106303675A (en) * 2016-08-24 2017-01-04 北京奇艺世纪科技有限公司 A kind of video segment extracting method and device
CN106851327A (en) * 2016-12-31 2017-06-13 天脉聚源(北京)科技有限公司 The method and apparatus for reviewing interaction
CN108295468A (en) * 2018-02-28 2018-07-20 网易(杭州)网络有限公司 Information processing method, equipment and the storage medium of game
CN108537139A (en) * 2018-03-20 2018-09-14 校宝在线(杭州)科技股份有限公司 A kind of Online Video wonderful analysis method based on barrage information
CN108694236A (en) * 2018-05-11 2018-10-23 优视科技有限公司 Video data handling procedure, device and electronic equipment
WO2019232094A1 (en) * 2018-05-31 2019-12-05 Sony Interactive Entertainment LLC Challenge game system

Also Published As

Publication number Publication date
CN113497946A (en) 2021-10-12

Similar Documents

Publication Publication Date Title
Chen et al. What comprises a good talking-head video generation?: A survey and benchmark
CN113497946B (en) Video processing method, device, electronic equipment and storage medium
US20230012732A1 (en) Video data processing method and apparatus, device, and medium
CN111491173B (en) Live cover determination method and device, computer equipment and storage medium
CN106462744B (en) Rule-based video importance analysis
WO2022184117A1 (en) Deep learning-based video clipping method, related device, and storage medium
CN110557659B (en) Video recommendation method and device, server and storage medium
CN111723784B (en) Risk video identification method and device and electronic equipment
CN109063662B (en) Data processing method, device, equipment and storage medium
WO2009113054A1 (en) Technological platform for gaming
CN111768478B (en) Image synthesis method and device, storage medium and electronic equipment
CN110677685B (en) Network live broadcast display method and device
JP7273100B2 (en) Generation of text tags from game communication transcripts
CN111405360A (en) Video processing method and device, electronic equipment and storage medium
CN112232258A (en) Information processing method and device and computer readable storage medium
CN112287848A (en) Live broadcast-based image processing method and device, electronic equipment and storage medium
CN113392690A (en) Video semantic annotation method, device, equipment and storage medium
CN114095742A (en) Video recommendation method and device, computer equipment and storage medium
CN111491179B (en) Game video editing method and device
CN113573128B (en) Audio processing method, device, terminal and storage medium
CN113610953A (en) Information processing method and device and computer readable storage medium
CN111783587A (en) Interaction method, device and storage medium
KR102460595B1 (en) Method and apparatus for providing real-time chat service in game broadcasting
CN112131426B (en) Game teaching video recommendation method and device, electronic equipment and storage medium
CN110574066B (en) Server device and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant