CN111013150A - Game video editing method, device, equipment and storage medium

Info

Publication number: CN111013150A (application number CN201911252604.4A)
Authority: CN (China)
Prior art keywords: game video, frames, frame, detected, target
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111013150B (en)
Inventor: 李廷天
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
History: application filed by Tencent Technology Shenzhen Co Ltd; publication of CN111013150A; application granted; publication of CN111013150B

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a game video editing method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring a game video to be edited, and determining a target object in the game video to be edited; performing frame extraction on the game video to be edited to obtain corresponding frames to be detected, and detecting image feature differences of the target object between the region pictures on different frames to be detected; determining, based on the image feature differences, target frames in the game video to be edited that meet a condition; and intercepting the target frames from the game video to be edited to obtain a game video highlights collection. According to the method and the device, the target frames to be intercepted are determined by detecting the image feature differences of the target object between the region pictures of different video frames. The process needs no manual intervention, which reduces labor and time costs, and the target frames obtained from the image feature differences accurately reflect the on-screen changes of the target object, effectively improving game video editing precision.

Description

Game video editing method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for editing a game video.
Background
Currently, with the rise of esports and the boom of the gaming industry, more and more game videos are shared to various new media platforms for viewing by other users on those platforms.

For large-scale games, the game video generated by a player during a match is typically very long. Such a video takes a long time to watch, contains a large number of dull scenes, and is not well suited to being spread and shared.

For this reason, a game video is usually clipped before it is shared. In existing game video clipping schemes, however, the clipping is done mainly by hand: the game video must be browsed manually, and the video segments of interest then identified and clipped out manually. This process consumes a great deal of labor and time, and game scenes are easily missed or clipped redundantly through human negligence, so the accuracy of game video clipping is poor.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a game video editing method, device, apparatus and storage medium, which can reduce labor cost and time cost and improve game video editing accuracy.
The specific scheme is as follows:
a first aspect of the present application provides a game video clipping method, comprising:
acquiring a game video to be clipped, and determining a target object in the game video to be clipped;
extracting frames of the game video to be edited to obtain corresponding frames to be detected, and detecting image characteristic differences of the target object among regional pictures on different frames to be detected;
determining a target frame meeting conditions in the game video to be edited based on the image feature difference;
and intercepting the target frame from the game video to be edited to obtain a game video highlights set.
A second aspect of the present application provides a game video clipping device, comprising:
the video acquisition module is used for acquiring a game video to be edited;
the object determining module is used for determining a target object in the game video to be clipped;
the video frame extracting module is used for extracting frames of the game video to be edited so as to obtain corresponding frames to be detected;
the difference detection module is used for detecting the image characteristic difference of the target object between the area pictures on different frames to be detected;
the target frame determining module is used for determining a target frame meeting conditions in the game video to be edited based on the image feature difference;
and the video intercepting module is used for intercepting the target frames from the game video to be edited so as to obtain a game video highlights collection.
A third aspect of the application provides an electronic device comprising a processor and a memory, wherein the memory is for storing a computer program that is loaded and executed by the processor to implement the aforementioned game video editing method.

A fourth aspect of the present application provides a storage medium having stored therein computer-executable instructions that, when loaded and executed by a processor, implement the aforementioned game video editing method.
According to the application, a target object is determined in the game video to be clipped, the image feature differences of the target object between the area pictures of different frames to be detected are then detected, the target frames in the game video to be clipped are determined based on those differences, and the target frames are intercepted from the game video to be clipped to obtain the corresponding game video highlights collection. Thus the target frames to be intercepted are determined by detecting the image feature differences of the target object between the area pictures of different video frames. The process needs no manual intervention, reducing labor and time costs; and because the image feature differences reflect how the target object's area picture changes across video frames, the target frames obtained from them accurately reflect the target object's on-screen changes, effectively improving game video editing precision.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system framework to which the game video clip scheme provided herein is applicable;
FIG. 2 is a flow chart of a game video editing method provided by the present application;
FIG. 3 is a schematic view of a hero character and its corresponding blood bars;
FIG. 4 is a schematic diagram of a hero character and its corresponding blood bars and character level identification;
FIG. 5 is a schematic diagram illustrating the posture changes caused by the hero character in combat provided by the present application;
FIG. 6 is a schematic diagram illustrating the brightness variation caused by the hero character in battle;
FIG. 7 is a schematic diagram illustrating hue changes caused by the hero character in battle;
FIG. 8 is a flow chart of a particular game video clipping method provided by the present application;
FIG. 9 is a flowchart of a specific game video clipping method provided by the present application;
FIG. 10 is a flow chart of a particular game video clipping method provided by the present application;
FIG. 11 is a flowchart of a particular game video clipping method provided herein;
FIG. 12 is a schematic diagram of a game video editing apparatus according to the present application;
FIG. 13 is a block diagram of an electronic device provided in the present application.
Detailed Description
At present, in order to reduce the time users spend watching game videos and to promote a game, the original game video must be clipped by manual editing to obtain a game video highlights collection suitable for sharing. Manual editing requires the editor to browse the entire video content and mark highlight segments, such as battle scenes, during browsing, which consumes a great deal of labor and time. Moreover, when an editor selects segments by hand, the segment boundaries are chosen subjectively, which affects clipping precision; and in order to speed up clipping, editors tend to skim through the video content, so that some ordinary, dull game scenes are clipped into the highlights collection or some highlight scenes are missed, degrading the clipping result.
Therefore, the existing game video editing mode needs to consume more labor cost and time cost, and the editing result is easily interfered by human subjective factors, so that the editing precision is low. In order to overcome the technical problem, the application provides a novel game video editing scheme, and the effects of saving the labor cost and the time cost when the game video is manually edited and improving the editing precision can be achieved.
In the game video clip scheme of the present application, a system framework adopted may be as shown in fig. 1, and specifically may include an original video database 01 and a game video clip platform 02, and may further include a new media platform 03.
The original video database 01 is specifically configured to store the original game videos generated by game players in matches and send them to the game video clipping platform 02 to serve as game videos to be clipped. After the game video clipping platform 02 acquires a game video to be clipped sent by the original video database 01, it may clip the video based on the specific game video editing method provided in the present application to obtain a game video highlights collection, and may in addition upload the highlights collection to the new media platform 03 when triggered by a user or automatically. After receiving the game video highlights collection sent by the game video clipping platform 02, the new media platform 03 can display it on the platform and, upon a viewing request initiated by a user, play the corresponding highlights collection through a video player.
There may be a variety of specific examples for the positional relationship between the original video database 01, the game video clip platform 02, and the new media platform 03.
In one embodiment, the original video database 01 and the game video clipping platform 02 are both located on the handheld terminal of a game player. While the game player competes through the handheld terminal, the terminal records the corresponding game video and stores it into the local original video database 01. Triggered by the user, the handheld terminal can input the original game video in the original video database 01 to the local game video clipping platform 02 to obtain a game video highlights collection. When the game player wants to share a certain highlights collection, it can be sent remotely to the new media platform 03 through the handheld terminal, so that users of the new media platform 03 can watch the highlights collection shared by the player.
In another specific example, the original video database 01 and the game video clipping platform 02 are both located in the same server. The server receives and stores the original game videos uploaded by each game player, clips them through the local game video clipping platform 02 to obtain the corresponding game video highlights collections, and displays these collections through the new media platform 03 to share them with its users.
In another embodiment, the original video database 01 is located on a user's handheld terminal or server, while the game video clipping platform 02 and the new media platform 03 are located on another server, so that the game video highlights collection obtained by the clipping platform 02 from the original game video can be displayed directly by the local new media platform 03 without remote data transmission, which is equivalent to integrating the game clipping function into the new media platform 03.
Of course, the original video database 01, the game video clipping platform 02 and the new media platform 03 may also be located on different servers, maintained and managed by different enterprises, with the game video highlights collection finally displayed through a video client connected to the new media platform 03.
Fig. 2 is a game video clipping method according to an embodiment of the present application. Referring to fig. 2, the game video clipping method may include the steps of:
and step S11, acquiring the game video to be clipped, and determining the target object in the game video to be clipped.
In this embodiment, the original game video may be acquired in a manner of receiving data remotely or in a manner of recording a video of a game match locally, so as to be used as the game video to be edited.
When the number of acquired game videos to be edited is very large, in order to improve the promotion efficiency and sharing effect of the game, this embodiment may first acquire the game tournament level, participating player level, participating player scale and match time corresponding to each game video to be edited, then calculate an editing priority for each video from these factors, and clip the videos in descending order of editing priority. It will be appreciated that the higher the tournament level, the higher the level of the participating players, the larger the scale of the participating players, and the closer the match time is to the current time, the higher the corresponding editing priority.
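By way of illustration only, the following Python sketch ranks pending videos with a simple weighted sum. The weights, the recency decay and the metadata fields are assumptions: the embodiment names the ranking factors but does not specify a scoring function.

```python
# Hypothetical priority scoring; weights and recency decay are illustrative.
def clip_priority(tournament_level, player_level, player_scale, age_days,
                  weights=(0.4, 0.3, 0.2, 0.1)):
    """Return a priority score; videos with higher scores are clipped first."""
    recency = 1.0 / (1.0 + max(0.0, age_days))   # 1.0 for a brand-new match
    # In practice the features should be normalized to comparable ranges.
    features = (tournament_level, player_level, player_scale, recency)
    return sum(w * f for w, f in zip(weights, features))

# Example: rank three pending videos by (tournament, player level, scale, age).
videos = [("v1", 3, 5, 100, 2.0), ("v2", 5, 8, 10000, 0.5), ("v3", 1, 2, 50, 30.0)]
videos.sort(key=lambda v: clip_priority(*v[1:]), reverse=True)   # "v2" ranks first
```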
In this embodiment, when a game video to be clipped is clipped, the target object in the video is determined first. The target object refers to an object whose corresponding area picture in the game video may change in image features as the game time advances, including but not limited to hero characters manipulated by players, non-player characters (NPCs), and the like.
In order to determine the target object in the game video to be clipped, this embodiment can proceed as follows: acquire a preset image template, and determine the image elements corresponding to it in the game video to be clipped using a template matching algorithm; determine the target picture area from the picture area where the image element is located together with a predetermined position relation, namely the position relation between the picture area of the image element and the picture area of the target object; and determine the object located in the target picture area as the target object. It should be noted that the image elements here refer specifically to elements whose position relative to the target object remains substantially unchanged during a game match and which have fixed outline features, including but not limited to the blood bar representing the life value above a hero character's head. A hero character and its corresponding blood bar are shown in FIG. 3. That is, because the image element has fixed outline features and a fixed relative position to the target object, an image template determined in advance from the element's outline features can be matched against the game video to be clipped, and the predetermined relative position relation then yields the picture area where the target object is located, thereby determining the target object in the game video. It is understood that the outline features of image elements include, but are not limited to, outline shape and outline color.
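A minimal Python sketch of this blood-bar matching step, assuming OpenCV is used; the template image, the match threshold and the fixed offset standing in for the predetermined position relation are illustrative assumptions.

```python
import cv2
import numpy as np

# Assumed (dx, dy) offset from the matched blood bar to the character's region.
BLOOD_BAR_TO_BODY_OFFSET = (0, 40)

def locate_target_objects(frame, blood_bar_template, threshold=0.85):
    """Return estimated top-left corners of character regions in `frame`."""
    result = cv2.matchTemplate(frame, blood_bar_template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= threshold)        # all matches above the threshold
    dx, dy = BLOOD_BAR_TO_BODY_OFFSET             # apply the position relation
    # Overlapping detections would need non-maximum suppression in practice.
    return [(int(x) + dx, int(y) + dy) for x, y in zip(xs, ys)]
```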
In addition, considering that ordinary characters in the game picture also carry blood bars above their heads, if the image element is a blood bar, the above target-object determination would match every character carrying a blood bar. In terms of game promotion, some of these characters are not ones viewers care about; clipping their scenes into the highlights collection does not help promote the game and may even be counterproductive. In terms of computational resources, such indiscriminate character matching also consumes more computing power. Therefore, in this embodiment, objects carrying a specific identifier may be further screened out from all target objects obtained by the template matching algorithm as the final target objects. The specific identifier may be a character level identifier whose corresponding character level is not less than a preset level. A hero character with its corresponding blood bar and character level identifier is shown in FIG. 4. The specific identifier may also be the fill color inside the blood bar, such as a red or blue blood-bar fill.
Of course, in order to determine the target object in the game video to be clipped, the embodiment may also be implemented by: creating an area selection interface on a human-computer interaction interface; acquiring area selection information through the area selection interface; selecting a target picture area in the game video to be edited according to the area selection information; and determining an object positioned on the target picture area as a target object. That is, in this embodiment, a region selection interface may be provided on the editing interface in advance, and a game player or a video manager may select a specific screen region through the region selection interface, so that the background may determine an object on the screen region as a target object. For example, the user can select a screen region where the hero character concerned by the user is currently located through the region selection interface, so that the hero character is determined as the target object.
And step S12, performing frame extraction on the game video to be clipped to obtain corresponding frames to be detected, and detecting the image characteristic difference of the target object between the regional pictures on different frames to be detected.
In this embodiment, before detecting the image feature differences between the area pictures corresponding to the target object on different frames to be detected, the corresponding frames may first be extracted from the game video to be edited in order to reduce the data processing load, and the image feature difference then detected between the frames extracted in each frame extraction event. Frame extraction here refers to extracting several frames from the video at certain intervals. When extracting frames from the game video to be clipped, the number of frames extracted each time and the time interval between two adjacent frame extraction events can be determined from the time length of the video. With the interval between adjacent extraction events held constant, the video's time length and the number of extraction events are positively correlated: the shorter the video, the fewer the extraction events, possibly only one. A single extraction event is typically the case when sharing the visual effect of a hero character using a particular skill, where the picture lasts a very short time. In addition, to ensure the clipping precision of the game video highlights collection, the time interval between two adjacent frame extraction events can be set to a small value. Further, the number of frames extracted in each event is usually two, for example two adjacent frames or two frames spaced apart.
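A Python sketch of this sampling policy, assuming OpenCV; the five-second interval and the two consecutive frames per event mirror the worked example given later in this application.

```python
import cv2

def extract_frame_pairs(video_path, interval_s=5.0):
    """Every `interval_s` seconds, grab two consecutive frames as one group."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(1, int(round(fps * interval_s)))
    pairs = []
    for start in range(0, total - 1, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, start)
        ok1, frame_t = cap.read()      # frame I_t
        ok2, frame_t1 = cap.read()     # adjacent frame I_{t+1}
        if ok1 and ok2:
            pairs.append((start, frame_t, frame_t1))
    cap.release()
    return pairs
```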
The image feature difference of the target object between its corresponding area pictures on different frames to be detected reflects how dynamically that area picture is changing. Highlight moments in a game are usually scenes whose picture information changes rapidly, such as battle scenes in which a character actively launches an attack or comes under external attack. Therefore, by detecting the image feature differences between different video frames and then judging their magnitude, highlight scenes such as battle scenes can be effectively distinguished.
When the hero character is in battle, the fighting action has a larger range of posture change than the walking and turning action in the non-fighting state, as shown in fig. 5, thereby causing a drastic change in the direction of the gradient in the picture of the area where the hero character is located. For this purpose, the difference of gradient features on the gradient histogram can be used to detect whether the hero character is in a fighting state. Wherein, the gradient characteristic difference can be specifically expressed as:
D_G = \left\| \mathrm{Hist}\left(\nabla_x I_t, \nabla_y I_t\right) - \mathrm{Hist}\left(\nabla_x I_{t+1}, \nabla_y I_{t+1}\right) \right\|_1

In the formula, ∇_x I_t and ∇_x I_{t+1} represent the gradients along the x-axis of the region pictures in which the target object is located in the previous video frame I_t and the next video frame I_{t+1}, ∇_y I_t and ∇_y I_{t+1} represent the corresponding gradients along the y-axis, Hist represents a histogram, D_G represents the gradient feature difference, and ||·||_1 represents the 1-norm.
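A Python sketch of D_G, under the assumption that Hist is a histogram over gradient orientation; the formula above only fixes that Hist is applied to the x- and y-gradients of the two region pictures and that the histograms are compared with a 1-norm, so the binning and normalization here are illustrative.

```python
import cv2
import numpy as np

def gradient_feature_difference(roi_t, roi_t1, bins=36):
    """1-norm distance between gradient-orientation histograms of two ROIs."""
    def orientation_hist(roi):
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)    # gradient along the x-axis
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)    # gradient along the y-axis
        angle = np.arctan2(gy, gx)                # orientation in [-pi, pi]
        hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi))
        return hist / max(1, hist.sum())          # normalize for comparability
    return float(np.abs(orientation_hist(roi_t) - orientation_hist(roi_t1)).sum())
```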
In addition, when fighting, a hero character is not limited to launching normal attacks against an opponent; it can also use a skill to release a light beam for a ranged attack. A normal attack more easily causes a change in the gradient histogram, while a skill-based ranged attack more easily causes a change in brightness or hue.
Fig. 6 shows the change of the brightness of the screen caused by the hero character when fighting. The change in brightness occurs because the brightness of the light beam released by the attack is very high, resulting in a significant difference in brightness between the picture before and after the release. For this purpose, it is possible to detect whether the hero character is in a fighting state by using the luminance feature difference on the luminance histogram. The brightness feature difference may be specifically expressed as:
D_V = \left\| \mathrm{Hist}\left(V_t\right) - \mathrm{Hist}\left(V_{t+1}\right) \right\|_1

where V denotes the luminance (Value) in the HSV (Hue, Saturation, Value) color model, equal to max(R, G, B), with R, G and B the values of the three color channels of the image; V_t denotes the luminance of the picture of the area where the target object is located in the previous video frame I_t, V_{t+1} denotes the luminance of the corresponding area picture in the next video frame I_{t+1}, Hist represents a histogram, D_V represents the luminance feature difference, and ||·||_1 represents the 1-norm.
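A corresponding Python sketch for D_V, with V = max(R, G, B) per pixel as defined above; the binning is illustrative.

```python
import numpy as np

def brightness_feature_difference(roi_t, roi_t1, bins=32):
    """1-norm distance between V-channel histograms of two ROIs (BGR input)."""
    def v_hist(roi):
        v = roi.max(axis=2)                        # V = max(R, G, B) per pixel
        hist, _ = np.histogram(v, bins=bins, range=(0, 256))
        return hist / max(1, hist.sum())
    return float(np.abs(v_hist(roi_t) - v_hist(roi_t1)).sum())
```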
Fig. 7 shows a change in picture hue caused by a hero character when fighting. The change in hue occurs because the light effect released during the attack is not particularly bright but has a distinct color, so the picture differs significantly in hue before and after the release. For this purpose, whether the hero character is in a fighting state can be detected using the hue feature difference on the hue histogram. The hue feature difference may be specifically expressed as:
D_H = \left\| \mathrm{Hist}\left(H_t\right) - \mathrm{Hist}\left(H_{t+1}\right) \right\|_1

where H denotes the hue in the HSV color model, computed from the R, G and B channels by the standard HSV conversion (if H < 0, 360 is added, i.e. H = H + 360); H_t denotes the hue of the picture of the region where the target object is located in the previous video frame I_t, H_{t+1} denotes the hue of the corresponding region picture in the next video frame I_{t+1}, Hist represents a histogram, D_H represents the hue feature difference, and ||·||_1 represents the 1-norm.
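A corresponding Python sketch for D_H, using OpenCV's HSV conversion in place of an explicit hue formula; OpenCV keeps H in [0, 180), already non-negative, so the adjustment of adding 360 when H < 0 is handled inside the conversion.

```python
import cv2
import numpy as np

def hue_feature_difference(roi_t, roi_t1, bins=36):
    """1-norm distance between hue histograms of two ROIs (BGR input)."""
    def h_hist(roi):
        h = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)[:, :, 0]   # H channel, [0, 180)
        hist, _ = np.histogram(h, bins=bins, range=(0, 180))
        return hist / max(1, hist.sum())
    return float(np.abs(h_hist(roi_t) - h_hist(roi_t1)).sum())
```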
It should be noted that, in the implementation process, the character action type preferred by viewers may be determined first, and the image feature difference used for detection chosen accordingly. For example, if viewers particularly like hero characters' skill-based ranged attacks, detection may be based on the brightness and hue feature differences; if viewers particularly like hero characters' normal attacks, detection may be based on the gradient feature difference.
And step S13, determining target frames meeting the conditions in the game video to be clipped based on the image feature difference.
Specifically, the image scene type corresponding to the corresponding frame to be detected is determined by judging the size relationship between the image feature difference and the preset difference threshold, and then the target frame which is subsequently edited to the game video collection is determined based on the frame to be detected with the image scene type in accordance with the preset type.
For example, assume the preset type is a battle scene. If the image feature difference is the gradient feature difference D_G, the corresponding gradient threshold may be set to 20 in advance: when D_G > 20 is detected, the picture scene type of the corresponding frame to be detected is determined to be a battle scene, and a target frame is determined from that frame. If the image feature difference is the brightness feature difference D_V, the corresponding brightness threshold may likewise be set to 20 in advance: when D_V > 20, the picture scene type is determined to be a battle scene and a target frame is determined accordingly. If the image feature difference is the hue feature difference D_H, the corresponding hue threshold may be set to 12 in advance: when D_H > 12, the picture scene type is determined to be a battle scene and a target frame is determined accordingly.
It should be noted that, in this embodiment, the condition for determining the target frame may further include other conditions besides the condition that the picture scene type matches the preset type, for example, the condition that the total time length of adding the target frame is less than or equal to the preset time length, so that the total time length of the finally obtained game video highlights is not too long, which is beneficial to further improving the viewing experience of the audience.
And step S14, intercepting the target frames from the game video to be clipped to obtain a game video highlights collection.
In this embodiment, after the target frame is determined, it is equivalent to locating which video segments are to be clipped in the game video highlights in the game video to be clipped. Therefore, the target frames can be directly intercepted from the game video to be edited so as to form a final game video collection.
It can thus be seen that in this embodiment the target object in the game video to be edited is first determined and located, the picture area where it sits is found, the image feature changes of that area across the preceding and following frames are detected directly, and whether the corresponding picture scene type is a battle scene can be judged directly from those changes, yielding the target frames to be clipped subsequently. Compared with the existing manual clipping schemes, this process reduces labor and time costs and effectively improves the accuracy of game video clipping. Compared with supervised-learning detection schemes based on deep neural network models, this embodiment needs neither to collect in advance, from massive game videos, huge numbers of samples of hero characters in different skins, orientations, skills and normal-attack behaviors, nor to carry out cumbersome model training on massive training samples, so the cost of using the scheme can be greatly reduced.
Fig. 8 is a specific game video clipping method according to an embodiment of the present application. Referring to fig. 8, the game video clipping method may include the steps of:
and step S21, acquiring the game video to be clipped, and determining the target object in the game video to be clipped.
It should be noted that, for the process of determining the target object in the game video to be clipped in step S21, reference may be specifically made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
And step S22, performing frame extraction on the game video to be clipped once to obtain two frames to be detected.
In this embodiment, if the time length of the game video to be clipped is relatively short, the frame extraction may be performed only once on the game video to be clipped, so as to obtain two frames to be detected. The two frames to be detected may be two adjacent frames of video frames, or two frames of video frames with a proper time interval.
And step S23, detecting the image characteristic difference of the target object between the area pictures of the two frames to be detected.
In this embodiment, before detecting the image feature difference, it is necessary to determine the region pictures corresponding to the target object on the two frames of frames to be detected, to obtain the two region pictures, and then detect the image feature difference between the two region pictures.
If the two frames to be detected are two adjacent video frames, the process of determining the region pictures corresponding to the target object on them may specifically include: determining a first picture area of the target object on the first frame to be detected, and taking the first picture area as the picture area of the target object on the second frame to be detected, obtaining a second picture area. That is, for two adjacent video frames the movement of the target object's region picture between them is very small and can be ignored, so the first picture area can be determined directly as the target object's picture area on the second frame. As for how to determine the first picture area of the target object on the first frame to be detected, the template matching or manual selection disclosed in the foregoing embodiments may be used, and details are not repeated here.
In addition, if the two frames of frames to be detected are two frames of video frames with appropriate time intervals, the process of determining the region pictures corresponding to the target object on the two frames of frames to be detected may specifically include: determining a first picture area of the target object on a first frame to be detected, identifying an identity mark, such as an account name of a player of a hero character, which dynamically follows the first picture area and is used for representing identity information of the target object on a picture, and then identifying the picture area where the target object is on a second frame to be detected based on the identity mark so as to realize dynamic tracking of the target object.
And step S24, determining the picture scene type corresponding to the corresponding frame to be detected by using the image characteristic difference.
In this embodiment, by comparing the image feature difference with the corresponding preset difference threshold, it can be determined whether the picture scene type corresponding to the corresponding video frame is a battle scene or a non-battle scene, and if the picture scene is a battle scene, it can be further determined whether the picture scene is a conventional attack scene or a skill attack scene, and the like.
And step S25, judging whether the picture scene type is consistent with a preset type, and if so, determining a target frame in the game video to be clipped based on the two frames of frames to be detected.
In this embodiment, the target frame is determined based on two frames to be detected obtained from one frame extraction event. In a specific embodiment, all video frames between the two frames to be detected whose picture scene type matches a preset type may be determined as target frames. That is, if the picture scene types corresponding to the two frames of frames to be detected are found to be consistent with the preset type, the picture scene types of all other video frames between the two frames to be detected can be considered to be consistent with the preset type, so that the data calculation amount can be reduced to a certain extent.
And step S26, intercepting the target frames from the game video to be clipped to obtain a game video highlights collection.
It can thus be seen that this embodiment uses the two video frames obtained in a single frame extraction event to determine, from the game video to be edited, the target frames that can subsequently be clipped into the highlights collection; this embodiment is generally suitable only for clipping game videos of short duration.
Fig. 9 is another specific game video clipping method provided in the embodiment of the present application. Referring to fig. 9, the game video clipping method may include the steps of:
and step S31, acquiring the game video to be clipped, and determining the target object in the game video to be clipped.
It should be noted that, for the process of determining the target object in the game video to be clipped in step S31, reference may be specifically made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
S32, performing frame extraction on the game video to be edited multiple times based on a preset frame extraction principle to obtain multiple groups of frames to be detected; the preset frame extraction principle specifies the time interval between every two adjacent frame extraction events and the two frames extracted in each frame extraction event.
In this embodiment, before performing frame extraction on a game video to be edited, a frame extraction principle is determined, including a time interval between every two adjacent frame extraction events and two frames extracted in each frame extraction event. And then performing frame extraction processing on the game video to be edited for multiple times according to the preset frame extraction principle to obtain multiple groups of frames to be detected, wherein each group of frames to be detected comprises two frames to be detected, and the two frames to be detected in each group of frames to be detected can be adjacent two frames of video frames or two frames of video frames with proper time intervals.
To maximize clipping accuracy, the two frames to be detected in each group are preferably two adjacent video frames, with the time interval between two adjacent frame extraction events set to zero. However, since the computational overhead of such a scheme is relatively large, the time interval between two adjacent frame extraction events may be set to a non-zero value in order to reduce overhead while still ensuring a certain clipping accuracy; the specific value may range from several seconds to several minutes according to the actual accuracy requirement.
And step S33, respectively detecting the image characteristic difference of the target object between the area pictures of the two frames of the frames to be detected in each group of frames to be detected, and obtaining a plurality of groups of image characteristic differences.
In this embodiment, before detecting the image feature difference corresponding to any one group of frames to be detected, it is necessary to determine the region pictures corresponding to the target object on two frames of frames to be detected of the group of frames to be detected, to obtain the two region pictures, and then detect the image feature difference between the two region pictures.
For the above process of determining the region pictures corresponding to the target object on the two frames to be detected, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
And step S34, determining the picture scene type corresponding to the corresponding frame to be detected by using the difference of the image characteristics of each group respectively to obtain a plurality of picture scene types.
For the process of determining the picture scene type corresponding to the frame to be detected by using the image feature difference, reference may be specifically made to the corresponding contents disclosed in the foregoing embodiments, which are not described herein again.
And step S35, clustering the multiple picture scene types by using the DBSCAN algorithm.

In this embodiment, the multiple picture scene types obtained in step S34 are clustered using the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm to obtain one or more clusters, where the picture scene types within each cluster are the same and temporally consecutive. It should be noted that DBSCAN is a representative density-based clustering algorithm. Unlike partitioning and hierarchical clustering methods, DBSCAN defines a cluster as the largest set of density-connected points, can partition areas of sufficiently high density into clusters, and can find clusters of arbitrary shape in a spatial database with noise.
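A minimal sketch of this clustering step, assuming scikit-learn is available. Embedding each detection as (timestamp, scene label) is an assumption; the large label scaling simply guarantees that detections cluster only when they are close in time and share a scene type, as required above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_scene_types(timestamps_s, scene_labels, eps=10.0, min_samples=2):
    """Return one cluster id per detection (-1 marks noise)."""
    # Scale the label so differently-typed detections can never fall within eps.
    X = np.column_stack([np.asarray(timestamps_s, dtype=float),
                         np.asarray(scene_labels, dtype=float) * 1e6])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
```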
And step S36, determining a target frame in the game video to be clipped based on the cluster of which the picture scene type in the clustering result is consistent with the preset type.
After obtaining a plurality of clusters through step S35, the present embodiment selects clusters with picture scene types that match the preset types from all clusters to obtain a plurality of target clusters, and then determines corresponding target frames from the game video to be edited based on each target cluster.
It should be noted that, for a target cluster, the process of determining a corresponding target frame from a game video to be clipped may specifically include: all the frames to be detected corresponding to all the picture scene types in the target cluster are directly determined as the target frames corresponding to the target cluster, and the target frames obtained by the implementation mode are intermittent rather than continuous on the time axis, but have the advantage of less occupied data storage space.
In order to ensure that the target frames corresponding to each target cluster are continuous on the time axis, in this embodiment, for one target cluster, the process of determining the corresponding target frames from the game video to be clipped may specifically include: determining, among all frames to be detected corresponding to the target cluster, a first frame with the earliest playing time and a second frame with the latest playing time, and then determining the video frames between them in the game video to be clipped as the target frames.
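A sketch of this continuous variant; the names are illustrative.

```python
def cluster_to_segment(timestamps_s, cluster_ids, target_cluster):
    """(start, end) of the continuous segment covered by one target cluster."""
    member_times = [t for t, c in zip(timestamps_s, cluster_ids)
                    if c == target_cluster]
    return min(member_times), max(member_times)   # earliest and latest frames
```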
And step S37, intercepting the target frame from the game video to be clipped to obtain a game video brocade set.
It can be understood that if the number of clusters in the clustering result whose picture scene type matches the preset type is N, N groups of target frames are obtained correspondingly, and when these target frames are subsequently intercepted from the game video to be clipped, N game video highlight segments are obtained. Of course, this embodiment may further splice the N segments into one, so as to obtain a single game video highlights collection.
It can thus be seen that in this embodiment, frames are extracted from the game video to be clipped multiple times to obtain multiple groups of frames to be detected, multiple picture scene types are determined from these groups, the picture scene types are clustered with the DBSCAN algorithm, and the target frames that can subsequently be clipped into the highlights collection are finally determined from the clusters in the clustering result. The scheme in this embodiment is better suited to clipping game videos of longer duration.
Fig. 10 is another specific game video clipping method provided in the embodiment of the present application. Referring to fig. 10, the game video clipping method may include the steps of:
and step S41, acquiring the game video to be clipped, and determining the target object in the game video to be clipped.
And step S42, performing frame extraction on the game video to be clipped to obtain corresponding frames to be detected, and detecting the image characteristic difference of the target object between the regional pictures on different frames to be detected.
And step S43, determining target frames meeting the conditions in the game video to be clipped based on the image feature difference.
In this embodiment, regarding the specific processes of the steps S41 to S43, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated herein.
And step S44, counting the total time length of the target frame.
And step S45, if the total time length is greater than the preset time length, screening out the video frames to be eliminated from the target frames based on the video frame priority.
In this embodiment, if the total time length of the target frames is too long, then in order to further reduce the user's viewing time and improve the viewing experience without seriously affecting the quality of the highlights collection content, the priority of each video frame among the target frames may first be determined, and the lower-priority video frames accounting for the excess over the preset time length may then be designated as video frames to be removed.

In order to determine the priority of each video frame among the target frames, this embodiment may proceed as follows: determine each frame's priority according to the character level identifiers of the hero characters in the frame, the type of fighting action, the total number of hero characters in the frame, and the like. It can be understood that a higher character level identifier raises a frame's priority; an action produced by using a skill raises the priority more than an action produced by a normal attack; and the more hero characters a frame contains, the higher its priority.
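A sketch of the pruning step; the per-frame priority score is assumed to have been computed beforehand from the cues just listed, since the embodiment names the cues but not their weighting.

```python
def prune_to_duration(frames, fps, max_seconds):
    """`frames` is a list of (frame_index, priority); keep the highest-priority
    frames whose total duration fits within `max_seconds`."""
    budget = int(max_seconds * fps)                       # frame budget
    keep = sorted(frames, key=lambda f: f[1], reverse=True)[:budget]
    return sorted(idx for idx, _ in keep)                 # restore playback order
```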
And step S46, removing the video frame to be removed from the target frame to obtain an optimized target frame.
In this embodiment, the video frames with lower priority and the total time length longer than the preset time length may be specifically determined as the video frames to be removed, so that after the video frames to be removed are removed from the target frames, the target frames with the total time length consistent with the preset time length may be obtained.
And step S47, intercepting the target frames from the game video to be clipped to obtain a game video highlights collection.
It is understood that the target frame in step S47 is specifically an optimized target frame if the target frame is subjected to the optimization process, and the target frame in step S47 is specifically an original target frame if the target frame is not subjected to the optimization process.
It can thus be seen that in this embodiment, if the total time length of the target frames is too long, then in order to further reduce the user's viewing time without seriously affecting the quality of the highlights collection content, the priority of each video frame among the target frames is determined and the lower-priority frames are removed, trimming the overly long set of target frames while retaining the higher-priority frames, so that the quality of the highlights collection content is not seriously affected.
Fig. 11 is another specific game video clipping method provided in the embodiment of the present application. Referring to fig. 11, the game video clipping method may include the steps of:
and step S51, acquiring the game video to be clipped, and determining the target object in the game video to be clipped.
And step S52, performing frame extraction on the game video to be clipped to obtain a corresponding frame to be detected.
In this embodiment, as to the specific processes of steps S51 to S52, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
And step S53, detecting the gradient characteristic difference, the brightness characteristic difference and the hue characteristic difference between the regional pictures of the target object on the two frames of frames to be detected.
For the detection process of the gradient feature difference, the luminance feature difference, and the hue feature difference, reference may be made to the corresponding contents disclosed in the foregoing embodiments, which are not described herein again.
And step S54, if any one of the gradient feature difference, the brightness feature difference and the hue feature difference is larger than a corresponding difference threshold, determining the picture scene type of the corresponding frame to be detected as a preset type.
In this embodiment, the preset type is a battle scene. If it is determined whether the scene type of the corresponding frame to be detected is a battle scene based on only the gradient feature difference, it may cause those battle scenes using skills to be misjudged as non-battle scenes. Similarly, if it is determined whether the picture scene type of the corresponding frame to be detected is a battle scene based on the brightness characteristic difference, it may cause that the battle scene in which a common attack is performed or the battle scene in which a skill causing only a hue change is used is misjudged as a non-battle scene. In addition, if it is determined whether the picture scene type of the corresponding frame to be detected is a battle scene based on only the hue characteristic difference, it may cause that those battle scenes in which a general attack is performed or battle scenes in which a skill causing only a brightness change is used are erroneously determined as non-battle scenes. Therefore, the embodiment of the application simultaneously adopts the gradient feature difference, the brightness feature difference and the hue feature difference to determine whether the picture scene type of the corresponding frame to be detected is a battle scene.
Specifically, in this embodiment, the difference threshold corresponding to the gradient feature difference is set to 20, the difference threshold corresponding to the brightness feature difference is set to 20, and the difference threshold corresponding to the hue feature difference is set to 12, so that the following variable can be used to represent whether the scene where the hero character is located is a battle scene:

B = (D_G > \delta_G) \mid (D_V > \delta_V) \mid (D_H > \delta_H)

In the formula, D_G represents the gradient feature difference, D_V the brightness feature difference and D_H the hue feature difference; δ_G is the difference threshold for the gradient feature difference, here 20; δ_V is the difference threshold for the brightness feature difference, here 20; δ_H is the difference threshold for the hue feature difference, here 12; and "|" in the formula represents the OR operator. In this embodiment, when B takes the value 1 the corresponding target object is in a fighting state, and when B takes the value 0 it is in a non-fighting state.
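A direct sketch of the decision variable B using the thresholds given above, assuming the three feature differences have already been computed for the frame pair.

```python
DELTA_G, DELTA_V, DELTA_H = 20.0, 20.0, 12.0   # thresholds from the text

def battle_flag(d_g, d_v, d_h):
    """B = (D_G > delta_G) | (D_V > delta_V) | (D_H > delta_H)."""
    return int(d_g > DELTA_G or d_v > DELTA_V or d_h > DELTA_H)
```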
And step S55, determining a target frame in the game video to be clipped based on the frame to be detected with the same picture scene type as the preset type.
Specifically, if frame extraction is performed only once in step S52, if the picture scene types corresponding to the two frames to be detected obtained by frame extraction at this time are the same as the preset type, the target frame is determined based on the two frames to be detected. If the frame extraction is performed for multiple times in step S52, the target frame is determined based on each group of frames to be detected whose picture scene type obtained in the multiple frame extraction event is the same as the preset type.
And step S56, intercepting the target frames from the game video to be clipped to obtain a game video highlights collection.
It can thus be seen that this embodiment uses the gradient, brightness and hue feature differences simultaneously to determine whether the picture scene type of the corresponding frame to be detected is a battle scene, avoiding the misjudgments that arise when only one or two of the three differences are used, improving the reliability and accuracy of battle scene detection, and thereby improving the precision of the battle highlights collection.
The technical scheme of the present application is particularly suitable for clipping MOBA (Multiplayer Online Battle Arena) game videos. The scheme is described below taking the currently popular MOBA game Honor of Kings as an example.
The game clipping platform obtains the grand finals video of the 2019 King Pro League (the Honor of Kings professional league) spring season as the game video to be clipped, matches the corresponding initial characters in the grand finals video with a template matching algorithm based on a preset blood-bar outline template, and then selects from the initial characters those whose blood bars have a blue fill color as the target hero characters. Frames are extracted from the grand finals video once every 5 seconds, two consecutive video frames being extracted each time as frames to be detected, so that multiple groups of frames to be detected are obtained. The gradient feature difference, brightness feature difference and hue feature difference between the region pictures of the target hero characters on the two frames of each group are detected respectively, yielding multiple groups of image feature differences. For any group of image feature differences, if the gradient feature difference corresponding to at least one hero character is greater than 20, or the brightness feature difference is greater than 20, or the hue feature difference is greater than 12, the picture scene type of the frames to be detected corresponding to that group is judged to be a battle scene; otherwise it is judged to be a non-battle scene. A number of picture scene types are thus obtained. These picture scene types are clustered with the DBSCAN algorithm, the target frames in the grand finals video are then determined from the clusters whose picture scene type is a battle scene, and the target frames are finally intercepted from the grand finals video to obtain a game video highlights collection of the 2019 King Pro League spring season grand finals, which is shared to the new media platform so that its users can view it.
As the above process shows, this embodiment first locates the target hero characters in the grand finals video and finds the picture areas where they appear, then directly detects how the gradient, brightness and hue features of those areas change between the two consecutive frames, and can decide directly from those feature changes whether the corresponding picture scene type is a battle scene, thereby obtaining the target frames for subsequent editing. Since a grand finals video of this kind is very long, a fully manual editing process would inevitably consume a great deal of labor and time, and the editing accuracy would be vulnerable to subjective factors. The editing scheme of this embodiment therefore effectively reduces labor and time costs while improving the accuracy of game video editing.
Referring to FIG. 12, which is a schematic structural diagram of a game video clipping device according to an embodiment of the present application, the game video clipping device includes:
the video acquisition module 11 is used for acquiring a game video to be edited;
an object determining module 12, configured to determine a target object in the game video to be clipped;
the video frame extracting module 13 is configured to extract a frame from the game video to be edited to obtain a corresponding frame to be detected;
a difference detection module 14, configured to detect an image feature difference between area pictures of the target object on different frames to be detected;
a target frame determining module 15, configured to determine, based on the image feature difference, a target frame in the game video to be clipped, which meets a condition;
and the video intercepting module 16 is used for intercepting the target frame from the game video to be edited so as to obtain a game video highlights collection.
In this embodiment, therefore, the target object in the game video to be edited is first determined so as to locate it and find the picture area where it appears; the image feature changes of that area across two consecutive frames are then detected directly, and whether the corresponding picture scene type is a battle scene is judged directly from those changes, yielding the target frames for subsequent clipping. Compared with existing manual clipping schemes, this process reduces labor and time costs and effectively improves the accuracy of game video clipping. Compared with supervised detection schemes based on deep neural network models, this embodiment needs neither to collect in advance, from large numbers of game videos, massive samples of hero characters under different skins, orientations, skills and normal-attack behaviors, nor to carry out complicated model training on massive training samples, so the cost of using the scheme is greatly reduced.
In some embodiments, the object determining module 12 may specifically include:
the template matching unit is used for acquiring a preset image template and determining image elements corresponding to the preset image template from the game video to be edited by using a template matching algorithm (see the sketch after this list);
the area determining unit is used for determining a target picture area by using the picture area where the image element is located and a predetermined position relation; the position relation is the position relation between the picture area where the image element is located and the picture area where the target object is located;
a first object determination unit for determining an object located on the target screen area as a target object.
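A minimal sketch of how the template matching unit and the position-relation unit might be realized with OpenCV follows; the template file name, the normalized cross-correlation method, the 0.8 score threshold, and the offset and size numbers in character_region are illustrative assumptions, not values specified by this application.

```python
import cv2
import numpy as np

def locate_blood_bars(frame_bgr, template_path="blood_bar_template.png", score=0.8):
    """Find candidate blood-bar positions by normalized template matching.
    Overlapping hits are not suppressed here, for brevity."""
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= score)
    return list(zip(xs.tolist(), ys.tolist()))  # top-left corners of matches

def character_region(bar_xy, body_offset=(0, 14), body_size=(120, 140)):
    """Shift from a blood-bar corner to the character's picture area using a
    fixed positional relation; the offset and size are made-up numbers."""
    x, y = bar_xy
    dx, dy = body_offset
    w, h = body_size
    return (x + dx, y + dy, w, h)  # (left, top, width, height)
```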
In some embodiments, the object determining module 12 may specifically include:
the interface creating unit is used for creating an area selection interface on the human-computer interaction interface;
the information acquisition unit is used for acquiring the area selection information through the area selection interface;
the area selection unit is used for selecting a target picture area in the game video to be edited according to the area selection information;
a second object determination unit configured to determine an object located on the target screen area as a target object.
In some embodiments, the video frame extracting module 13 is specifically configured to perform frame extraction on the game video to be edited once to obtain two frames to be detected; correspondingly, the difference detecting module 14 is specifically configured to detect the image feature difference of the target object between the region pictures on the two frames to be detected, as illustrated in the sketch below.
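As an illustration, the one-shot extraction could be realized with OpenCV as below; the 5-second timestamp is an arbitrary example. Calling this helper repeatedly, e.g. once every 5 seconds, yields the multiple groups of frames used by the multi-extraction variant described further below.

```python
import cv2

def grab_frame_pair(video_path, at_second):
    """Seek to a timestamp and read two consecutive frames to be detected."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, at_second * 1000.0)
    ok_a, frame_a = cap.read()  # first frame to be detected
    ok_b, frame_b = cap.read()  # the immediately following frame
    cap.release()
    if not (ok_a and ok_b):
        raise IOError("could not read two consecutive frames at %ss" % at_second)
    return frame_a, frame_b

# One-shot variant: a single pair at an arbitrary example timestamp.
pair = grab_frame_pair("match.mp4", 5.0)
```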
In some embodiments, the target frame determining module 15 specifically includes:
the first type determining unit is used for determining the picture scene type corresponding to the corresponding frame to be detected by utilizing the image characteristic difference;
and the first target frame determining unit is used for judging whether the picture scene type is consistent with a preset type or not, and if so, determining a target frame in the game video to be clipped based on the two frames of frames to be detected.
In some embodiments, the video frame extracting module 13 is specifically configured to perform frame extraction on the game video to be edited for multiple times based on a preset frame extracting principle to obtain multiple groups of frames to be detected; the preset frame extracting principle comprises a time interval of every two adjacent frame extracting events and two frames of extracted frames in each frame extracting event; correspondingly, the difference detecting module 14 is specifically configured to detect image feature differences between the region pictures of the target object on two frames of each group of frames to be detected, so as to obtain multiple groups of image feature differences.
In some embodiments, the target frame determining module 15 specifically includes:
the second type determining unit is used for determining the picture scene type corresponding to the corresponding frame to be detected by using the characteristic difference of each group of images to obtain a plurality of picture scene types;
the clustering unit is used for clustering the plurality of picture scene types by using the DBSCAN algorithm (see the sketch after this list);
and the second target frame determining unit is used for determining a target frame in the game video to be edited based on the cluster of which the picture scene type in the clustering result is consistent with the preset type.
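A hedged sketch of the clustering unit follows: DBSCAN is run over the timestamps of the frame pairs judged to be battle scenes, so that temporally adjacent detections merge into continuous segments. The eps and min_samples values are illustrative tuning assumptions, not parameters fixed by this application.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def battle_segments(timestamps, is_battle, eps=10.0, min_samples=2):
    """Cluster the timestamps (seconds) of battle-scene detections so that
    nearby detections merge into continuous (start, end) segments."""
    times = np.array([t for t, battle in zip(timestamps, is_battle) if battle])
    if times.size == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(times.reshape(-1, 1)).labels_
    segments = []
    for cluster_id in set(labels) - {-1}:  # -1 marks noise detections
        members = times[labels == cluster_id]
        segments.append((float(members.min()), float(members.max())))
    return sorted(segments)
```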
In some embodiments, the game video clip device further comprises:
the time length counting module is used for counting the total time length of the target frame;
the video frame screening module is used for screening out the video frames to be eliminated from the target frames based on video frame priority when the total duration exceeds the preset duration (see the sketch after this list);
and the video frame eliminating module is used for eliminating the video frame to be eliminated from the target frame so as to obtain the optimized target frame.
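The screening and eliminating modules might look like the following sketch, assuming each candidate segment carries a priority score (how such a score is computed is not specified by this application and is an assumption here).

```python
def trim_to_duration(segments, max_total_seconds):
    """segments: list of (start, end, priority). Keep the highest-priority
    segments whose combined length fits the preset total duration."""
    kept, total = [], 0.0
    for start, end, priority in sorted(segments, key=lambda s: s[2], reverse=True):
        length = end - start
        if total + length <= max_total_seconds:
            kept.append((start, end, priority))
            total += length
    return sorted(kept)  # restore chronological order

# Example: cap the collection at 120 seconds of footage.
trimmed = trim_to_duration([(10, 40, 0.9), (60, 130, 0.5), (200, 250, 0.8)], 120)
```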
In some embodiments, the difference detecting module 14 specifically includes:
the first picture region determining unit is used for determining a first picture area of the target object on a first frame to be detected;
the second picture area determining unit is used for determining the first picture area as the picture area of the target object on a second frame to be detected so as to obtain a second picture area;
a difference detection unit for detecting an image feature difference between pictures on the first picture area and the second picture area.
In some embodiments, the difference detecting module 14 is specifically configured to detect a gradient feature difference, a brightness feature difference, and a hue feature difference between the region pictures of the target object on the two frames to be detected.
In some embodiments, the target frame determining module 15 is specifically configured to determine, when any one of the gradient feature difference, the brightness feature difference, and the hue feature difference is greater than a corresponding difference threshold, a picture scene type of the corresponding frame to be detected as a preset type.
Further, an embodiment of the present application also provides an electronic device. FIG. 13 is a block diagram of an electronic device 20 according to an exemplary embodiment; nothing in the figure should be taken as limiting the scope of use of the present application.
FIG. 13 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is configured to store a computer program that is loaded and executed by the processor 21 to implement the relevant steps of the game video clipping method disclosed in any of the foregoing embodiments; the program may, for example, be written in Python. The electronic device 20 may specifically be a desktop computer, a handheld terminal, or a server, and in this embodiment requires more than 8 GB of memory.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., the resources stored thereon include an operating system 221, a computer program 222, data 223 including game videos, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 221, which may be Windows Server, Netware, Unix, Linux or the like, manages and controls the hardware devices and the computer program 222 on the electronic device 20, enabling the processor 21 to operate on and process the data 223 in the memory 22. Besides the program implementing the game video clipping method disclosed in any of the foregoing embodiments, the computer programs 222 may include programs for performing other specific tasks. The data 223 may include various game video data collected by the electronic device 20.
It should be further noted that the electronic device in this embodiment may be a blockchain node in a blockchain network, in addition to a node in a conventional distributed computer cluster.
Further, an embodiment of the present application also discloses a storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are loaded and executed by a processor, the steps of the game video clipping method disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts may be cross-referenced among them. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is kept brief; see the method section for the relevant details.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing describes in detail the game video editing method, apparatus, device and storage medium provided by the present application. Specific examples are used herein to illustrate the principles and implementations of the application, and the descriptions of the above embodiments are intended only to help understand the method and its core ideas. Those skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and the application scope. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (14)

1. A game video clipping method, comprising:
acquiring a game video to be clipped, and determining a target object in the game video to be clipped;
extracting frames of the game video to be edited to obtain corresponding frames to be detected, and detecting image characteristic differences of the target object among regional pictures on different frames to be detected;
determining a target frame meeting conditions in the game video to be edited based on the image feature difference;
and intercepting the target frame from the game video to be edited to obtain a game video highlights collection.
2. The game video clipping method of claim 1, wherein the determining a target object in the game video to be clipped comprises:
acquiring a preset image template, and determining image elements corresponding to the preset image template from the game video to be edited by utilizing a template matching algorithm;
determining a target picture area by using the picture area where the image element is located and a predetermined position relation; the position relation is the position relation between the picture area where the image element is located and the picture area where the target object is located;
and determining an object positioned on the target picture area as a target object.
3. The game video clipping method of claim 1, wherein the determining a target object in the game video to be clipped comprises:
creating an area selection interface on a human-computer interaction interface;
acquiring area selection information through the area selection interface;
selecting a target picture area in the game video to be edited according to the area selection information;
and determining an object positioned on the target picture area as a target object.
4. The game video clipping method according to claim 1, wherein extracting frames from the game video to be clipped to obtain corresponding frames to be detected and detecting the image feature difference of the target object between area pictures on different frames to be detected comprises:
performing frame extraction on the game video to be edited once to obtain two frames to be detected;
and detecting the image characteristic difference of the target object between the area pictures of the two frames to be detected.
5. The game video clipping method according to claim 4, wherein the determining a target frame satisfying a condition in the game video to be clipped based on the image feature difference comprises:
determining the picture scene type corresponding to the corresponding frame to be detected by using the image characteristic difference;
and judging whether the picture scene type is consistent with a preset type or not, and if so, determining a target frame in the game video to be clipped based on the two frames of frames to be detected.
6. The game video clipping method according to claim 1, wherein extracting frames from the game video to be clipped to obtain corresponding frames to be detected and detecting the image feature difference of the target object between area pictures on different frames to be detected comprises:
performing frame extraction on the game video to be edited for multiple times based on a preset frame extraction principle to obtain multiple groups of frames to be detected; the preset frame extracting principle comprises a time interval of every two adjacent frame extracting events and two frames of extracted frames in each frame extracting event;
and respectively detecting the image characteristic difference of the target object between the area pictures of the two frames to be detected of each group of frames to be detected to obtain a plurality of groups of image characteristic differences.
7. The game video clipping method according to claim 6, wherein the determining a target frame satisfying a condition in the game video to be clipped based on the image feature difference comprises:
determining the picture scene type corresponding to the corresponding frame to be detected by using the characteristic difference of each group of images respectively to obtain a plurality of picture scene types;
clustering the multiple picture scene types by using a DBSCAN algorithm;
and determining a target frame in the game video to be edited based on the cluster of which the picture scene type in the clustering result is consistent with the preset type.
8. The game video clipping method according to claim 1, wherein the determining a target frame satisfying a condition in the game video to be clipped based on the image feature difference further comprises:
counting the total time length of the target frame;
if the total time length is greater than the preset time length, screening out video frames to be eliminated from the target frames based on the video frame priority;
and removing the video frame to be removed from the target frame to obtain an optimized target frame.
9. A game video clipping method according to any one of claims 4 to 7, wherein detecting an image feature difference between regional pictures of the target object over two frames to be detected comprises:
determining a first picture area of the target object on a first frame to be detected;
determining the first picture area as a picture area of the target object on a second frame to be detected to obtain a second picture area;
detecting an image feature difference between pictures on the first picture area and the second picture area.
10. The game video clipping method according to claim 5 or 7, wherein detecting the difference in image characteristics between the region pictures of the target object over two frames to be detected comprises:
and detecting gradient characteristic difference, brightness characteristic difference and hue characteristic difference between the regional pictures of the target object on the two frames of frames to be detected.
11. The method for clipping game video according to claim 10, wherein determining the scene type of the picture corresponding to the frame to be detected using the image feature difference comprises:
and if any one of the gradient feature difference, the brightness feature difference and the hue feature difference is larger than a corresponding difference threshold value, determining the picture scene type of the corresponding frame to be detected as a preset type.
12. A game video clip apparatus, comprising:
the video acquisition module is used for acquiring a game video to be edited;
the object determining module is used for determining a target object in the game video to be clipped;
the video frame extracting module is used for extracting frames of the game video to be edited so as to obtain corresponding frames to be detected;
the difference detection module is used for detecting the image characteristic difference of the target object between the area pictures on different frames to be detected;
the target frame determining module is used for determining a target frame meeting conditions in the game video to be edited based on the image feature difference;
and the video intercepting module is used for intercepting the target frame from the game video to be edited so as to obtain a game video highlights collection.
13. An electronic device, comprising a processor and a memory; wherein the memory is for storing a computer program that is loaded and executed by the processor to implement a game video clip method as claimed in any one of claims 1 to 11.
14. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, carry out a game video clipping method according to any one of claims 1 to 11.
CN201911252604.4A 2019-12-09 2019-12-09 Game video editing method, device, equipment and storage medium Active CN111013150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252604.4A CN111013150B (en) 2019-12-09 2019-12-09 Game video editing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111013150A true CN111013150A (en) 2020-04-17
CN111013150B CN111013150B (en) 2020-12-18

Family

ID=70205031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252604.4A Active CN111013150B (en) 2019-12-09 2019-12-09 Game video editing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111013150B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101909161A (en) * 2009-12-17 2010-12-08 新奥特(北京)视频技术有限公司 Video clipping method and device
JP5701707B2 (en) * 2011-07-25 2015-04-15 株式会社ソニー・コンピュータエンタテインメント Moving image photographing apparatus, information processing system, information processing apparatus, and image data processing method
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 Video editing method and electronic equipment
CN108259990A (en) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 A kind of method and device of video clipping
CN110505519A (en) * 2019-08-14 2019-11-26 咪咕文化科技有限公司 Video editing method, electronic equipment and storage medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333537A (en) * 2020-07-27 2021-02-05 深圳Tcl新技术有限公司 Video integration method and device and computer readable storage medium
CN112333537B (en) * 2020-07-27 2023-12-05 深圳Tcl新技术有限公司 Video integration method, device and computer readable storage medium
CN111741331B (en) * 2020-08-07 2020-12-22 北京美摄网络科技有限公司 Video clip processing method, device, storage medium and equipment
CN111741331A (en) * 2020-08-07 2020-10-02 北京美摄网络科技有限公司 Video clip processing method, device, storage medium and equipment
CN114079804B (en) * 2020-08-13 2024-03-26 北京达佳互联信息技术有限公司 Method, device, terminal and storage medium for detecting multimedia resources
CN114079804A (en) * 2020-08-13 2022-02-22 北京达佳互联信息技术有限公司 Multimedia resource detection method, device, terminal and storage medium
CN112087661B (en) * 2020-08-25 2022-07-22 腾讯科技(上海)有限公司 Video collection generation method, device, equipment and storage medium
CN112087661A (en) * 2020-08-25 2020-12-15 腾讯科技(上海)有限公司 Video collection generation method, device, equipment and storage medium
CN112380390A (en) * 2020-08-31 2021-02-19 北京字节跳动网络技术有限公司 Video processing method and device
CN113542865A (en) * 2020-12-25 2021-10-22 腾讯科技(深圳)有限公司 Video editing method, device and storage medium
CN113312967A (en) * 2021-04-22 2021-08-27 北京搜狗科技发展有限公司 Detection method, device and device for detection
CN113312967B (en) * 2021-04-22 2024-05-24 北京搜狗科技发展有限公司 Detection method and device for detection
CN113596598A (en) * 2021-07-22 2021-11-02 网易(杭州)网络有限公司 Game information processing method, device, equipment and storage medium
CN114422851A (en) * 2022-01-24 2022-04-29 腾讯科技(深圳)有限公司 Video clipping method, video clipping device, electronic equipment and readable medium
CN114422851B (en) * 2022-01-24 2023-05-16 腾讯科技(深圳)有限公司 Video editing method, device, electronic equipment and readable medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40021124

Country of ref document: HK

GR01 Patent grant