CN110087145A - Method and apparatus for handling video - Google Patents

Method and apparatus for handling video

Info

Publication number
CN110087145A
CN110087145A (application CN201910492581.8A; granted as CN110087145B)
Authority
CN
China
Prior art keywords
video
quality value
slicing
time point
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910492581.8A
Other languages
Chinese (zh)
Other versions
CN110087145B (en)
Inventor
陈奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910492581.8A
Publication of CN110087145A
Application granted
Publication of CN110087145B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure disclose a method and apparatus for processing video. One specific embodiment of the method includes: acquiring a target video; generating a quality value sequence for the target video, where each element of the sequence characterizes a time point on the target video's timeline together with the quality value corresponding to that time point, and a quality value characterizes the quality of the video frames at that time point on the timeline; in response to determining that the quality value sequence contains at least two local maxima, selecting two adjacent local maxima from the at least two local maxima; determining the smallest quality value among the quality values in the sequence lying between the two selected adjacent local maxima; and slicing the target video based on the time point corresponding to the determined smallest quality value to obtain sliced videos. This embodiment can slice a video according to its different scenes, and can improve the efficiency of video slicing.

Description

Method and apparatus for handling video
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a method and apparatus for processing video.
Background art
Video slicing refers to the operation of splitting one video into at least two video clips, and can be applied in scenarios such as video editing and advertisement removal.
In prior-art video slicing approaches, slicing positions are usually designated manually, after which video processing software (for example, video production software or video editing software) is used to slice the video.
Summary of the invention
Embodiments of the disclosure propose a method and apparatus for processing video.
In a first aspect, embodiments of the disclosure provide a method for processing video, the method comprising: acquiring a target video; generating a quality value sequence for the target video, where each element of the sequence characterizes a time point on the target video's timeline and the quality value corresponding to that time point, a quality value characterizes the quality of the video frames at that time point on the timeline, and the elements of the sequence are arranged in chronological order; in response to determining that the quality value sequence contains at least two local maxima, selecting two adjacent local maxima from the at least two local maxima, where a local maximum is greater than the quality values adjacent to it in the sequence; determining the smallest quality value among the quality values in the sequence lying between the two selected adjacent local maxima; and slicing the target video based on the time point corresponding to the determined smallest quality value to obtain sliced videos.
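The slicing criterion of the first aspect — locate adjacent local maxima in the quality value sequence and cut at the smallest value between them — can be sketched as follows. This is a hypothetical illustration only; the patent specifies no concrete implementation, and the function name and index-based interface are assumptions.

```python
def find_cut_indices(quality):
    """For each pair of adjacent local maxima in the quality value
    sequence, return the index of the smallest quality value lying
    strictly between them (a candidate slicing position)."""
    # A local maximum is greater than both of its adjacent values.
    maxima = [i for i in range(1, len(quality) - 1)
              if quality[i] > quality[i - 1] and quality[i] > quality[i + 1]]
    cuts = []
    for a, b in zip(maxima, maxima[1:]):
        # Smallest quality value between the two adjacent maxima.
        cuts.append(min(range(a + 1, b), key=lambda i: quality[i]))
    return cuts

# The sequence used in the worked example later in the description:
print(find_cut_indices([3, 8, 2, 6, 4, 5, 7, 1]))  # → [2, 4]
```

Indices 2 and 4 carry the values 2 and 4, the smallest quality values between the maxima pairs (8, 6) and (6, 7) respectively; in the method, the time points corresponding to such minima are the slicing positions.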
In some embodiments, after the quality value sequence is generated, the method further includes: generating, based on the elements of the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate. Slicing the target video based on the time point corresponding to the determined smallest quality value then comprises: determining a target time range containing the time point corresponding to the smallest quality value, where the target time range is contained within the time range spanned by the time points corresponding to the two selected adjacent local maxima; determining a target point on the line of the graph, where the quality value corresponding to the target point is the smallest quality value within the target time range; and slicing the target video based on the time corresponding to the target point to obtain sliced videos.
In some embodiments, generating the quality value sequence of the target video includes: for each time point on the timeline corresponding to the target video, performing the following steps: selecting a target number of video frames from the video frames at that time point; determining the video frame quality value of each selected frame; and determining the quality value corresponding to the time point based on the determined frame quality values; then arranging the determined quality values in the chronological order of their corresponding time points to generate the quality value sequence.
In some embodiments, determining the video frame quality values of the selected frames includes, for each frame among the selected target number of frames: applying a Laplace transform to the frame to obtain a transformed frame; determining the variance of the pixel values of the pixels in the transformed frame; and taking the determined variance as the frame's video frame quality value.
In some embodiments, selecting two adjacent local maxima from the at least two local maxima includes: selecting, from the at least two local maxima, two local maxima that are adjacent and whose difference is greater than or equal to a preset threshold.
In some embodiments, the method further includes: for each of the obtained sliced videos, obtaining a label characterizing the video features of that sliced video, and storing the sliced video in association with the obtained label.
In some embodiments, the method further includes: in response to receiving a video acquisition request sent by a communicatively connected user terminal, performing the following steps: selecting, based on the labels corresponding to the obtained sliced videos, a sliced video matching the video acquisition request; sending the selected sliced video to the user terminal; and controlling the user terminal to play the selected sliced video.
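The label-based storage and retrieval just described can be sketched as a minimal in-memory index. The store structure, identifiers, and tag strings below are invented for illustration; the patent does not prescribe a storage mechanism.

```python
# Hypothetical in-memory store: sliced-video id -> descriptive label.
store = {}

def save_slice(slice_id, label):
    """Store a sliced video (by id) in association with its label."""
    store[slice_id] = label

def fetch_slices(requested_label):
    """Return ids of stored slices whose label matches the request."""
    return [sid for sid, label in store.items() if label == requested_label]

save_slice("clip_001", "indoor scene")
save_slice("clip_002", "outdoor scene")
save_slice("clip_003", "indoor scene")
print(fetch_slices("indoor scene"))  # → ['clip_001', 'clip_003']
```

A production system would more likely use a database index keyed on the label, but the lookup logic is the same: the request's label selects which sliced videos are returned to the terminal.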
In a second aspect, embodiments of the disclosure provide an apparatus for processing video, the apparatus comprising: an acquisition unit configured to acquire a target video; a first generation unit configured to generate a quality value sequence for the target video, where each element of the sequence characterizes a time point on the target video's timeline and the quality value corresponding to that time point, a quality value characterizes the quality of the video frames at that time point on the timeline, and the elements of the sequence are arranged in chronological order; a selection unit configured to select, in response to determining that the quality value sequence contains at least two local maxima, two adjacent local maxima from the at least two local maxima, where a local maximum is greater than the quality values adjacent to it in the sequence; a determination unit configured to determine the smallest quality value among the quality values in the sequence lying between the two selected adjacent local maxima; and a slicing unit configured to slice the target video based on the time point corresponding to the determined smallest quality value to obtain sliced videos.
In some embodiments, the apparatus further includes: a second generation unit configured to generate, based on the elements of the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate. The slicing unit then includes: a first determination module configured to determine a target time range containing the time point corresponding to the smallest quality value, where the target time range is contained within the time range spanned by the time points corresponding to the two selected adjacent local maxima; a second determination module configured to determine a target point on the line of the graph, where the quality value corresponding to the target point is the smallest quality value within the target time range; and a slicing module configured to slice the target video based on the time corresponding to the target point to obtain sliced videos.
In some embodiments, the first generation unit includes: an execution module configured to perform, for each time point on the timeline corresponding to the target video, the following steps: selecting a target number of video frames from the video frames at that time point; determining the video frame quality value of each selected frame; and determining the quality value corresponding to the time point based on the determined frame quality values; and a generation module configured to arrange the determined quality values in the chronological order of their corresponding time points to generate the quality value sequence.
In some embodiments, the execution module is further configured to perform, for each frame among the selected target number of frames, the following steps: applying a Laplace transform to the frame to obtain a transformed frame; determining the variance of the pixel values of the pixels in the transformed frame; and taking the determined variance as the frame's video frame quality value.
In some embodiments, the selection unit is further configured to select, from the at least two local maxima, two local maxima that are adjacent and whose difference is greater than or equal to a preset threshold.
In some embodiments, the apparatus further includes: a storage unit configured to obtain, for each of the obtained sliced videos, a label characterizing the video features of that sliced video, and to store the sliced video in association with the obtained label.
In some embodiments, the apparatus further includes: a sending unit configured to perform, in response to receiving a video acquisition request sent by a communicatively connected user terminal, the following steps: selecting, based on the labels corresponding to the obtained sliced videos, a sliced video matching the video acquisition request; sending the selected sliced video to the user terminal; and controlling the user terminal to play the selected sliced video.
In a third aspect, embodiments of the disclosure provide an electronic device comprising: one or more processors; and a storage apparatus on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method of any of the embodiments of the method for processing video described above.
In a fourth aspect, embodiments of the disclosure provide a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any of the embodiments of the method for processing video described above.
The method and apparatus for processing video provided by embodiments of the disclosure first acquire a target video and generate its quality value sequence, where each element of the sequence characterizes a time point on the target video's timeline and the corresponding quality value, a quality value characterizes the quality of the video frames at that time point, and the elements are arranged in chronological order. Then, in response to determining that the sequence contains at least two local maxima, two adjacent local maxima are selected from them, a local maximum being greater than the quality values adjacent to it in the sequence. Next, the smallest quality value among the values in the sequence lying between the two selected adjacent local maxima is determined, and finally the target video is sliced based on the time point corresponding to that smallest quality value to obtain sliced videos. The video can thus be sliced based on frame quality: during shooting, scene switches typically reduce the quality of the captured frames, so slicing at the low-quality time points amid changing frame quality effectively slices the video according to its different scenes, which in turn facilitates subsequent scene-based video storage and video retrieval. Moreover, compared with prior-art methods in which videos are sliced manually, slicing based on frame quality can improve the efficiency of video slicing.
Brief description of the drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram in which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for processing video according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for processing video according to an embodiment of the disclosure;
Fig. 4 is a flowchart of another embodiment of the method for processing video according to the disclosure;
Fig. 5 is a schematic diagram of the line graph generated in one embodiment of the method for processing video of the disclosure;
Fig. 6 is a structural schematic diagram of one embodiment of the apparatus for processing video according to the disclosure;
Fig. 7 is a structural schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the disclosure.
Detailed description of embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the disclosure and the features in the embodiments may be combined with one another. The disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for processing video or the apparatus for processing video of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as video processing applications, web browser applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices equipped with cameras, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers and so on. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a video processing server that processes target videos shot and obtained by the terminal devices 101, 102, 103. The video processing server may analyze and otherwise process received data such as the target video, and obtain a processing result (for example, sliced videos).
It should be noted that the method for processing video provided by embodiments of the disclosure may be executed by the terminal devices 101, 102, 103 or by the server 105; correspondingly, the apparatus for processing video may be disposed in the terminal devices 101, 102, 103 or in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided as required. When the data used in generating sliced videos need not be obtained remotely, the system architecture may include no network and comprise only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for processing video according to the disclosure is shown. The method for processing video comprises the following steps:
Step 201: acquire a target video.
In the present embodiment, the executing body of the method for processing video (for example, the server shown in Fig. 1) may acquire the target video remotely or locally through a wired connection or a wireless connection. Here, the target video is the video to be processed.
In practice, the target video may be a video obtained by shooting an actual scene.
Step 202: generate a quality value sequence for the target video.
In the present embodiment, based on the target video acquired in step 201, the executing body may generate the quality value sequence of the target video, where each element of the sequence characterizes a time point on the target video's timeline and the quality value corresponding to that time point.
Specifically, the timeline of the target video can be used to connect video frames in chronological order to obtain the target video. It can be understood that the timeline of the target video is determined while the target video is being shot. In practice, the minimum unit on the timeline may be one second, i.e., adjacent time points on the timeline are one second apart. As an example, the start of the timeline may be 00:00 (0 min 0 s); the first time point may be 00:01 (0 min 1 s); the second time point may be 00:02 (0 min 2 s); the third time point may be 00:03 (0 min 3 s); and so on.
In the present embodiment, the elements of the quality value sequence are arranged in chronological order. The time points corresponding to the elements may be consecutive time points on the timeline or discrete time points; no limitation is made here. Specifically, they may be time points at positions on the timeline predetermined by a technician, all time points on the timeline, or time points determined at random from the timeline.
In the present embodiment, a quality value characterizes the quality of the video frames at a time point on the timeline. Specifically, the video frames at a time point may be the frames shot within the period from that time point to the next adjacent time point on the timeline. For example, for the time point "00:02", the frames at that time point may be the frames shot within the period "00:02–00:03".
In the present embodiment, the quality value corresponding to a time point in the sequence may be determined from the quality of the video frames at that time point. As an example, for a time point corresponding to an element of the sequence, the executing body may first obtain the video frames at that time point, then determine the video frame quality value of each obtained frame, and finally average the determined frame quality values, taking the result as the quality value corresponding to the time point. Here, a video frame quality value characterizes the quality of a video frame; specifically, the larger the value, the better the quality of the frame.
The video frame quality value of a frame may be determined using various methods; for example, an existing image quality recognition model may be used.
In some optional implementations of the present embodiment, the executing body may generate the quality value sequence of the target video as follows. First, for each time point on the timeline corresponding to the target video, the executing body may perform the following steps: select a target number of video frames from the frames at that time point; determine the video frame quality value of each selected frame; and determine the quality value corresponding to the time point based on the determined frame quality values. Then, the executing body may arrange the determined quality values in the chronological order of their corresponding time points to generate the quality value sequence.
In this implementation, the target number may be a predetermined number, or a number determined from the number of frames at the time point (for example, half the number of frames at that time point). Specifically, the executing body may select the target number of frames in various ways, for example by random selection or in chronological order.
Here, the executing body may determine a frame's quality value using the image quality recognition model mentioned above. In addition, in some optional implementations of the present embodiment, the executing body may perform, for each frame among the selected target number of frames, the following steps: apply a Laplace transform to the frame to obtain a transformed frame; determine the variance of the pixel values of the pixels in the transformed frame; and take the determined variance as the frame's video frame quality value.
In practice, in flat regions of an image (regions where the gray value does not change), the intensity values of the Laplace-transformed image are 0. In regions where the gray value changes sharply (edges), the intensity values (absolute values) of the transformed image are relatively large. The Laplace transform is therefore commonly used to detect edge information in images. It can further be understood that, for the edge-enhanced frame (the transformed frame), a larger variance indicates clearer edges in the frame, and hence better frame quality.
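The variance-of-Laplacian measure just described can be sketched in plain Python on a grayscale image represented as a list of rows. This is an illustrative sketch, not the patent's implementation; the 4-neighbour Laplacian kernel and the sample images are assumptions.

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian response of a grayscale
    image (list of rows of pixel values); higher values indicate
    sharper edges, hence better frame quality under this measure."""
    h, w = len(gray), len(gray[0])
    # Laplacian response at each interior pixel: up + down + left + right - 4*center.
    resp = [gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
            + gray[y][x + 1] - 4 * gray[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(resp) / len(resp)
    return sum((v - mean) ** 2 for v in resp) / len(resp)

flat = [[128] * 8 for _ in range(8)]      # uniform image: no edges
edged = [[0, 255] * 4 for _ in range(8)]  # alternating columns: strong edges
print(laplacian_variance(flat))           # → 0.0
print(laplacian_variance(edged) > laplacian_variance(flat))  # → True
```

In practice this is typically computed with an image library, e.g. `cv2.Laplacian(gray, cv2.CV_64F).var()` in OpenCV, which applies the same kernel with optimized code.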
In this implementation, based on the determined frame quality values, the executing body may determine the quality value corresponding to the time point in various ways. For example, it may average the determined frame quality values and take the result as the quality value corresponding to the time point; alternatively, it may take the maximum of the determined frame quality values as the quality value corresponding to the time point.
Specifically, for the quality values corresponding to the determined time points, the executing body may arrange the determined quality values in order of their time points from earliest to latest to generate the quality value sequence, or arrange them from latest to earliest.
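The per-time-point aggregation and chronological arrangement described above can be sketched as follows. The input mapping of time points to sampled frame quality values is a hypothetical illustration; the patent leaves the data layout unspecified.

```python
def quality_sequence(frames_by_timepoint, reduce="mean"):
    """Build the quality value sequence: one value per time point,
    combining the frame quality values sampled at that time point,
    arranged in chronological order."""
    seq = []
    for t in sorted(frames_by_timepoint):  # earliest to latest
        values = frames_by_timepoint[t]
        if reduce == "mean":
            seq.append(sum(values) / len(values))
        else:                              # "max" variant
            seq.append(max(values))
    return seq

samples = {0: [3.0, 5.0], 1: [8.0, 6.0], 2: [2.0, 2.0]}
print(quality_sequence(samples))          # → [4.0, 7.0, 2.0]
print(quality_sequence(samples, "max"))   # → [5.0, 8.0, 2.0]
```

Either reduction yields a sequence indexed by time point, which is the input to the local-maximum analysis of step 203.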
Step 203, in response to determining that quality value sequence includes at least two local maximums, from least two local maximas Two adjacent local maximums are chosen in value.
In the present embodiment, based on the quality value sequence obtained in step 202, above-mentioned executing subject can be in response to determination Quality value sequence includes at least two local maximums, and two adjacent local maximas are chosen from least two local maximums Value.Wherein, in quality value sequence, adjacent with local maximum mass value be less than local maximum.
It is appreciated that when being shot using camera, if the target of expectation shooting changes, in order to continue to shoot New target, may move camera, so that target is contained in coverage.In this case, the movement of camera Meeting is so that the clarity for the video frame that shooting obtains is low, correspondingly, so that the mass value that shooting obtains video frame is small.When target packet When contained in coverage, camera then may be at relatively steady state, at this point, the clarity for the video frame that shooting obtains Height, correspondingly, the mass value that shooting obtains video frame are big.And when needing handoff scenario, when shooting new target, then need again Dollying head in this case, shoots the clear of the video frame of acquisition so that the fresh target of switching is contained in coverage Clear degree can decline, and correspondingly, the mass value that shooting obtains video frame reduces.When the fresh target of switching is contained in coverage When, camera can then be again at relatively steady state, and the clarity for shooting the video frame of acquisition can increase again, accordingly Ground, the mass value that shooting obtains video frame can become larger.
In turn, in the present embodiment, local maximum can correspond to relatively stable at camera in shooting process State, and quality value sequence include at least two local maximums, camera in shooting process can be characterized and occurred at least Variation from moving condition to metastable state twice can characterize in turn and scene switching occurred.Also, adjacent two The change procedure of mass value between a local maximum can correspond to the process of scene switching.
Particularly, since local maximum usually corresponds to the metastable state that camera is in shooting process, Shooting initial phase and shooting ending phase do not correspond to local maximum usually, and in turn, in the present embodiment, local maximum can Think and be corresponding with two adjacent mass values in quality value sequence, and is greater than the quality of two corresponding, adjacent mass values Value.
Here, the above-described execution body may select two adjacent local maxima in various ways. Specifically, when the at least two local maxima included in the quality value sequence consist of exactly two local maxima, the execution body may directly select these two. When the at least two local maxima include at least three local maxima, then for every two adjacent local maxima among them, the execution body may select the two adjacent local maxima to form a local maximum group.
As an example, suppose the quality value sequence is "3; 8; 2; 6; 4; 5; 7; 1". The values "3" and "2" adjacent to "8" are both smaller than "8", so "8" is a local maximum; the values "2" and "4" adjacent to "6" are both smaller than "6", so "6" is a local maximum; and the values "5" and "1" adjacent to "7" are both smaller than "7", so "7" is also a local maximum. It can be seen that the quality value sequence includes the local maxima "8", "6" and "7". In turn, the execution body may select the two adjacent local maxima "8" and "6" to form one local maximum group, and select the two adjacent local maxima "6" and "7" to form another.
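The selection described above can be sketched in code. This is a minimal illustration, not the patent's implementation: a value counts as a local maximum when both neighbours exist and are strictly smaller, matching the example sequence.

```python
def local_maxima(seq):
    """Return (index, value) pairs for every local maximum in seq."""
    return [(i, seq[i])
            for i in range(1, len(seq) - 1)
            if seq[i - 1] < seq[i] and seq[i + 1] < seq[i]]

def adjacent_maxima_pairs(maxima):
    """Group every two adjacent local maxima into a local maximum group."""
    return [(maxima[i], maxima[i + 1]) for i in range(len(maxima) - 1)]

quality_values = [3, 8, 2, 6, 4, 5, 7, 1]
maxima = local_maxima(quality_values)
print(maxima)                         # [(1, 8), (3, 6), (6, 7)]
print(adjacent_maxima_pairs(maxima))  # [((1, 8), (3, 6)), ((3, 6), (6, 7))]
```

Note that, as in the text, the first and last elements of the sequence (shooting start and end) can never qualify as local maxima here.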
In some optional implementations of the present embodiment, the execution body may also select, from the at least two local maxima, two local maxima that are adjacent and whose difference is greater than or equal to a preset threshold.
Here, the preset threshold may be a preset numerical value. Continuing the above example, for the quality value sequence "3; 8; 2; 6; 4; 5; 7; 1", the execution body determines the three local maxima "8", "6" and "7". Suppose the preset threshold is 2. Since the difference between "6" and "7" is less than 2, while the difference between "8" and "6" is greater than or equal to 2, the execution body may select the two local maxima "8" and "6".
It may be appreciated that the larger the difference is, the more likely it is that the two adjacent local maxima correspond to different scenes. Therefore, this implementation can determine local maxima corresponding to different scenes more accurately, which helps to cut the target video by scene more precisely.
It should be noted that, when the execution body obtains at least two local maximum groups based on step 203 (here, one local maximum group includes two adjacent local maxima), the execution body may perform the subsequent step 204 and step 205 for each of the at least two local maximum groups.
Step 204: determining a smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima.
In the present embodiment, based on the two adjacent local maxima selected in step 203, the execution body may determine the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima.
Continuing the example of step 203, the quality value sequence is "3; 8; 2; 6; 4; 5; 7; 1", and the three determined local maxima are "8", "6" and "7". For the two adjacent local maxima "8" and "6", it can be determined from the quality value sequence that the quality values located between them include only "2", so "2" may be directly determined as the smallest quality value corresponding to "8" and "6". For the two adjacent local maxima "6" and "7", it can be determined from the quality value sequence that the quality values located between them include "4" and "5", so the execution body may determine "4" as the smallest quality value corresponding to "6" and "7".
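Step 204 can be sketched as follows; this is a hedged illustration, with ties resolved to the earliest time point (the text does not specify tie-breaking):

```python
def smallest_between(seq, left_idx, right_idx):
    """Return (index, value) of the smallest quality value strictly between
    two local maxima given by their indices in the sequence."""
    idx = min(range(left_idx + 1, right_idx), key=lambda i: seq[i])
    return idx, seq[idx]

quality_values = [3, 8, 2, 6, 4, 5, 7, 1]
print(smallest_between(quality_values, 1, 3))  # (2, 2): between "8" and "6"
print(smallest_between(quality_values, 3, 6))  # (4, 4): between "6" and "7"
```

The returned index identifies the time point at which the target video would be cut in step 205.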
It may be appreciated that the smallest quality value between adjacent local maxima usually corresponds to the time point at which the movement of the camera is most apparent during the scene switch. Therefore, cutting the target video based on the smallest quality value between adjacent local maxima helps to cut the target video more accurately into videos of different scenes.
Step 205: cutting the target video based on the time point corresponding to the determined smallest quality value, to obtain cut videos.
In the present embodiment, based on the time point corresponding to the smallest quality value determined in step 204, the execution body may cut the target video to obtain cut videos.
Specifically, the execution body may cut the target video in various ways based on the time point corresponding to the smallest quality value. For example, the execution body may cut directly at the time point corresponding to the smallest quality value, so that the video frames captured before the time point and the video frames captured after the time point respectively form cut videos. It should be noted that the video frame captured at the time point corresponding to the smallest quality value may be included in the cut video formed by the video frames captured before the time point, or may be included in the cut video formed by the video frames captured after the time point; no restriction is imposed here.
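A minimal sketch of this cut (again an illustration, not the patent's implementation): here the frame at the cut point goes into the second cut video, which is one of the two options the text explicitly leaves open.

```python
def cut_at(frames, cut_index):
    """Split frames into the clip before the cut point and the clip from it on."""
    return frames[:cut_index], frames[cut_index:]

frames = ["f0", "f1", "f2", "f3", "f4"]
before, after = cut_at(frames, 2)
print(before, after)  # ['f0', 'f1'] ['f2', 'f3', 'f4']
```

Choosing `frames[cut_index - 1::-1]`-style boundaries instead, i.e. placing the boundary frame in the first clip, would be equally valid under the text.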
In some optional implementations of the present embodiment, for a cut video among the obtained cut videos, the execution body may obtain a label used to characterize a video feature of the cut video, and store the cut video and the obtained label in association. The video feature may be any of various features of the cut video, including but not limited to at least one of the following: its size, its corresponding scene, and its order among the obtained cut videos. The obtained cut videos may be arranged in chronological order.
Specifically, the execution body may obtain labels in various ways. For example, for a cut video among the obtained cut videos, the execution body may output the cut video and then obtain a label input by a user for the cut video. Alternatively, the execution body may obtain a pre-stored tag set; then, for a cut video among the obtained cut videos, it may first recognize the cut video (for example, recognize the size of the cut video, or recognize its order among the obtained cut videos) to obtain a recognition result (for example, second in order), and then obtain the label corresponding to the cut video (for example, "2") from the tag set (for example, "1; 2; 3; 4").
By storing labels and cut videos in association, this implementation helps to extract video clips (cut videos) of the target video quickly and conveniently based on the labels, which in turn helps to improve the efficiency of video processing.
In some optional implementations of the present embodiment, the execution body may also, in response to receiving a video acquisition request sent by a communicatively connected user terminal, perform the following steps: selecting, based on the labels corresponding to the obtained cut videos, a cut video corresponding to the video acquisition request from the obtained cut videos; sending the selected cut video to the user terminal; and controlling the user terminal to play the selected cut video.
Here, the video acquisition request may include information (for example, the text "clip 1") used to indicate the video feature of the requested video, and the execution body may then select, based on the labels (for example, "1"), the cut video corresponding to the video acquisition request from the obtained cut videos. It may be appreciated that the cut video corresponding to the video acquisition request is the cut video whose video feature is the same as or similar to the video feature indicated by the video acquisition request.
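The associated storage and request handling above can be sketched with a plain mapping; the label scheme (clip order as a string) and all file names are assumptions for illustration only.

```python
cut_videos = ["cat_clip_a.mp4", "cat_clip_b.mp4"]
labels = ["1", "2"]  # e.g. each clip's order among the cut videos

# associated storage: label -> cut video
store = dict(zip(labels, cut_videos))

def handle_request(requested_label):
    """Return the cut video whose label matches the video acquisition request."""
    return store.get(requested_label)

print(handle_request("2"))  # cat_clip_b.mp4
print(handle_request("9"))  # None: no clip carries the requested label
```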
In this implementation, the user terminal may be a terminal used by the user and communicatively connected to the execution body. In practice, the execution body may send a control signal to the user terminal, thereby controlling the user terminal to play the selected cut video.
This implementation can send the cut videos, cut by scene, to the user terminal and control the user terminal to play them. In this way, the user terminal can be controlled to play, in a targeted manner, the video corresponding to the scene and the video acquisition request, which in turn improves the playing effect of the video. Moreover, compared with controlling the user terminal to play the entire target video, this implementation can reduce the waste of resources.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing video according to the present embodiment. In the application scenario of Fig. 3, a server 301 may first obtain a video (the target video) 302 captured by shooting a cat. Then, the server 301 may generate a quality value sequence 303 ("8; 9; 6; 4; 7; 3") of the video 302, wherein an element in the quality value sequence 303 is used to characterize a time point in the time axis of the video 302 and the quality value corresponding to the time point, the elements in the quality value sequence 303 are arranged in chronological order, and a quality value characterizes the quality of the video frame at the time point in the time axis (for example, in the quality value sequence "8; 9; 6; 4; 7; 3", the quality value "8" may characterize the quality of the video frame at 12:01 (12 minutes 01 seconds) in the time axis of the video 302, "9" may characterize the quality of the video frame at 12:02, "6" may characterize the quality of the video frame at 12:03, and so on). Then, in response to determining that the quality value sequence 303 includes at least two local maxima, namely the local maximum 3031 ("9") and the local maximum 3032 ("7"), the server 301 may select the local maximum 3031 and the local maximum 3032, wherein the quality values adjacent to each local maximum in the quality value sequence 303 are smaller than that local maximum. Then, the server 301 may determine the smallest quality value 3033 ("4") from the quality values ("6; 4") in the quality value sequence 303 located between the local maximum 3031 and the local maximum 3032. Finally, the server 301 may cut the video 302 based on the time point ("12:04") corresponding to the determined smallest quality value 3033, obtaining the cut video 304 and the cut video 305.
The method provided by the above embodiment of the present disclosure can cut a video based on the quality of its video frames. Since the process of scene switching usually reduces the quality of the video frames while the video is being shot, cutting the video at the low-quality time points in the quality variation of the video frames makes it possible to cut the video by scene, which in turn facilitates subsequent scene-based video storage and video extraction. Moreover, compared with the prior-art method of cutting a video manually, cutting a video based on the quality of its video frames can improve the efficiency of video slicing.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing video is shown. The flow 400 of the method for processing video includes the following steps:
Step 401: obtaining a target video.
In the present embodiment, the execution body of the method for processing video (for example, the server shown in Fig. 1) may obtain the target video remotely or locally through a wired or wireless connection. The target video is the video to be processed.
In practice, the target video may be a video obtained by shooting an actual scene.
Step 402: generating a quality value sequence of the target video.
In the present embodiment, based on the target video obtained in step 401, the execution body may generate the quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and the quality value corresponding to the time point. The elements in the quality value sequence are arranged in chronological order. A quality value is used to characterize the quality of the video frame at the time point in the time axis.
Step 403: generating, based on the elements in the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate.
In the present embodiment, based on the elements in the quality value sequence generated in step 402, the execution body may generate a line graph with time as the abscissa and quality value as the ordinate.
Specifically, the execution body may take the elements in the quality value sequence as known points in a coordinate system with time as the abscissa and quality value as the ordinate, and generate, by means of fitting, a line that characterizes the relationship between quality value and time. As an example, Fig. 5 shows a schematic diagram of the line graph generated in the present embodiment. Reference 501 in the figure is the line generated by fitting. The marks "12:01, 12:02, 12:03, 12:04, 12:05" on the abscissa correspond to the time points corresponding to the elements in the quality value sequence, and the points on the line at these time points correspond to the elements in the quality value sequence.
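The line of step 403 can be sketched as piecewise-linear interpolation through the (time point, quality value) samples; the patent leaves the fitting method open, so a straight polyline is only one assumed choice. Times below are seconds past 12:00, mirroring the 12:01-12:05 marks of Fig. 5.

```python
def interpolate(points, t):
    """Evaluate the fitted polyline through (time, quality) samples at time t."""
    for (t0, q0), (t1, q1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return q0 + (q1 - q0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside the sampled time range")

points = [(1, 8), (2, 9), (3, 6), (4, 4), (5, 7)]
print(interpolate(points, 2.5))  # 7.5, halfway between the 12:02 and 12:03 samples
```

A smoother fit (e.g. a spline) would let the minimum of the line fall between sample points, as the target point 504 does in Fig. 5; the linear version above is merely the simplest instance.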
Step 404: in response to determining that the quality value sequence includes at least two local maxima, selecting two adjacent local maxima from the at least two local maxima.
In the present embodiment, based on the quality value sequence obtained in step 402, the execution body may, in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, wherein the quality values adjacent to a local maximum in the quality value sequence are smaller than the local maximum.
Step 405: determining a smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima.
In the present embodiment, based on the two adjacent local maxima selected in step 404, the execution body may determine the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima.
The above steps 401, 402, 404 and 405 may be performed in manners similar to those of steps 201, 202, 203 and 204 in the foregoing embodiment, respectively. The above descriptions of steps 201, 202, 203 and 204 also apply to steps 401, 402, 404 and 405, and are not repeated here.
Step 406: determining a target time range containing the time point corresponding to the smallest quality value.
In the present embodiment, based on the smallest quality value obtained in step 405, the execution body may determine a target time range containing the time point corresponding to the smallest quality value. The target time range is contained in the time range formed by the time points corresponding to the two selected adjacent local maxima. Here, the time range formed by the time points corresponding to the two adjacent local maxima is specifically the time range that starts at one of those two time points and ends at the other.
Specifically, the execution body may determine the target time range in various ways. For example, the execution body may directly determine the time range formed by the time points corresponding to the two selected adjacent local maxima as the target time range; or it may randomly determine, within that time range, a time range containing the time point corresponding to the smallest quality value as the target time range; or it may obtain a preset duration t (for example, 0.1 s) and, assuming that the time point corresponding to the smallest quality value is x, determine the target time range as (x - t, x + t).
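The last option can be sketched as follows. Since the text requires the target time range to lie inside the interval spanned by the two adjacent local maxima, the sketch clips (x - t, x + t) to that interval; the clipping step is an added assumption, and times are kept as integer tenths of a second for exactness.

```python
def target_time_range(x, t, left_max_time, right_max_time):
    """Return (x - t, x + t) clipped to the interval of the adjacent maxima."""
    return max(x - t, left_max_time), min(x + t, right_max_time)

# smallest quality value at 4.0 s, preset duration 0.1 s (40 and 1 in tenths),
# adjacent local maxima at 2.0 s and 5.0 s
print(target_time_range(40, 1, 20, 50))  # (39, 41)
```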
As an example, with reference to Fig. 5, reference 502 in the figure is the time range formed by the time points (12:02 and 12:04) corresponding to the two selected adjacent local maxima, and reference 503 is the determined target time range.
Step 407: determining a target point from the line of the line graph.
In the present embodiment, based on the line graph obtained in step 403, the execution body may determine a target point from the line of the line graph, wherein the quality value corresponding to the target point is the smallest quality value within the target time range determined in step 406.
As an example, with reference to Fig. 5, reference 504 in the figure is the target point, and the quality value corresponding to it is the smallest quality value within the target time range 503.
Step 408: cutting the target video based on the time corresponding to the target point, to obtain cut videos.
In the present embodiment, based on the time corresponding to the target point determined in step 407, the execution body may cut the target video to obtain cut videos.
As an example, with reference to Fig. 5, reference 505 in the figure is the time corresponding to the target point 504 (approximately 12:02.8, i.e. 12 minutes 2.8 seconds).
Specifically, the execution body may cut the target video in various ways based on the time corresponding to the target point. For example, the execution body may cut directly at the time corresponding to the target point, so that the video frames captured before that time and the video frames captured after that time respectively form cut videos. It should be noted that the video frame captured at the time corresponding to the target point may be included in the cut video formed by the video frames captured before that time, or may be included in the cut video formed by the video frames captured after that time; no restriction is imposed here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for processing video in the present embodiment highlights the steps of generating a line graph based on the quality value sequence, determining a target point from the line of the line graph, and then cutting the target video based on the time corresponding to the target point to obtain cut videos. The scheme described in the present embodiment can thus analyze, as a whole and based on the line graph, the quality values of the video at various moments, so as to determine more precisely the moment of the minimum quality value during the scene switch, and in turn cut the video by scene more accurately, improving the accuracy of video processing. In addition, determining the target point based on the time point corresponding to the smallest quality value in the quality value sequence can reduce the search range for the target point on the line of the line graph, which can improve the efficiency of determining the target point and reduce the resources consumed in doing so.
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for processing video. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 6, the apparatus 600 for processing video of the present embodiment includes: an obtaining unit 601, a first generation unit 602, a selection unit 603, a determination unit 604 and a cutting unit 605. The obtaining unit 601 is configured to obtain a target video. The first generation unit 602 is configured to generate a quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and the quality value corresponding to the time point, a quality value is used to characterize the quality of the video frame at the time point in the time axis, and the elements in the quality value sequence are arranged in chronological order. The selection unit 603 is configured to, in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, wherein the quality values adjacent to a local maximum in the quality value sequence are smaller than the local maximum. The determination unit 604 is configured to determine a smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima. The cutting unit 605 is configured to cut the target video based on the time point corresponding to the determined smallest quality value, to obtain cut videos.
In the present embodiment, the obtaining unit 601 of the apparatus 600 for processing video may obtain the target video remotely or locally through a wired or wireless connection. The target video is the video to be processed. In practice, the target video may be a video obtained by shooting an actual scene.
In the present embodiment, based on the target video obtained by the obtaining unit 601, the first generation unit 602 may generate the quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and the quality value corresponding to the time point. The elements in the quality value sequence are arranged in chronological order. A quality value is used to characterize the quality of the video frame at the time point in the time axis.
In the present embodiment, based on the quality value sequence obtained by the first generation unit 602, the selection unit 603 may, in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, wherein the quality values adjacent to a local maximum in the quality value sequence are smaller than the local maximum.
In the present embodiment, based on the two adjacent local maxima selected by the selection unit 603, the determination unit 604 may determine the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima.
In the present embodiment, based on the time point corresponding to the smallest quality value determined by the determination unit 604, the cutting unit 605 may cut the target video to obtain cut videos.
In some optional implementations of the present embodiment, the apparatus 600 may further include a second generation unit (not shown in the figure), configured to generate, based on the elements in the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate. The cutting unit 605 may then include: a first determination module (not shown), configured to determine a target time range containing the time point corresponding to the smallest quality value, wherein the target time range is contained in the time range formed by the time points corresponding to the two selected adjacent local maxima; a second determination module (not shown), configured to determine a target point from the line of the line graph, wherein the quality value corresponding to the target point is the smallest quality value within the target time range; and a cutting module (not shown), configured to cut the target video based on the time corresponding to the target point, to obtain cut videos.
In some optional implementations of the present embodiment, the first generation unit 602 may include: an execution module (not shown), configured to perform the following steps for each time point in the time axis corresponding to the target video: selecting a target number of video frames from the video frames at the time point; determining the video frame quality values of the selected target number of video frames respectively; and determining, based on the determined video frame quality values, the quality value corresponding to the time point; and a generation module (not shown), configured to arrange the determined quality values according to the chronological order of the corresponding time points, to generate the quality value sequence.
In some optional implementations of the present embodiment, the execution module may be further configured to perform the following steps for each video frame among the selected target number of video frames: applying a Laplacian transformation to the video frame to obtain a transformed video frame; determining the variance of the pixel values of the pixels in the transformed video frame; and determining the determined variance as the video frame quality value of the video frame.
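This variance-of-Laplacian sharpness score can be sketched as follows, reading the "Laplacian transformation" as convolution with the 4-neighbour Laplacian kernel; that reading, and the two tiny grayscale frames, are assumptions for illustration.

```python
def laplacian_variance(frame):
    """frame: 2-D list of grayscale values; returns the variance of the
    Laplacian over interior pixels, used as a sharpness (quality) score."""
    h, w = len(frame), len(frame[0])
    lap = [frame[y - 1][x] + frame[y + 1][x] + frame[y][x - 1] + frame[y][x + 1]
           - 4 * frame[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

flat = [[7] * 4 for _ in range(4)]   # uniform frame: no edges, blur-like
sharp = [[0, 255, 0, 255],           # checkerboard: strong edges
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
print(laplacian_variance(flat))   # 0.0
print(laplacian_variance(sharp))  # 1040400.0: the sharper frame scores higher
```

A blurry frame (e.g. one captured while the camera is moving) has weak edges, so its Laplacian response and hence its variance is low, which is exactly why this score works as the quality value of the embodiments above.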
In some optional implementations of the present embodiment, the selection unit 603 may be further configured to select, from the at least two local maxima, two local maxima that are adjacent and whose difference is greater than or equal to a preset threshold.
In some optional implementations of the present embodiment, the apparatus 600 may further include a storage unit (not shown), configured to obtain, for a cut video among the obtained cut videos, a label used to characterize a video feature of the cut video, and to store the cut video and the obtained label in association.
In some optional implementations of the present embodiment, the apparatus 600 may further include a transmission unit, configured to, in response to receiving a video acquisition request sent by a communicatively connected user terminal, perform the following steps: selecting, based on the labels corresponding to the obtained cut videos, a cut video corresponding to the video acquisition request from the obtained cut videos; sending the selected cut video to the user terminal; and controlling the user terminal to play the selected cut video.
It may be understood that the units recorded in the apparatus 600 correspond to the respective steps in the method described with reference to Fig. 2. Therefore, the operations, features and beneficial effects described above for the method are equally applicable to the apparatus 600 and the units included therein, and are not repeated here.
The apparatus 600 provided by the above embodiment of the present disclosure can cut a video based on the quality of its video frames. Since the process of scene switching usually reduces the quality of the video frames while the video is being shot, cutting the video at the low-quality time points in the quality variation of the video frames makes it possible to cut the video by scene, which in turn facilitates subsequent scene-based video storage and video extraction. Moreover, compared with the prior-art method of cutting a video manually, cutting a video based on the quality of its video frames can improve the efficiency of video slicing.
Referring now to Fig. 7, a structural schematic diagram of an electronic device 700 (for example, the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example, and should not impose any limitation on the functions and scope of use of embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing apparatus (for example, a central processing unit, a graphics processor, etc.) 701, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage apparatus 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing apparatus 701, the ROM 702 and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following apparatuses may be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 707 including, for example, a liquid crystal display (LCD), a loudspeaker, a vibrator, etc.; a storage apparatus 708 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 709. The communication apparatus 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows an electronic device 700 with various apparatuses, it should be understood that it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 709, or installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above-described functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by, or in connection with, an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a target video; generate a quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and a quality value corresponding to that time point, the quality value is used to characterize the quality of the video frame at that time point in the time axis, and the elements in the quality value sequence are arranged in chronological order; in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, wherein, in the quality value sequence, the quality values adjacent to a local maximum are smaller than the local maximum; determine the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima; and, based on the time point corresponding to the determined smallest quality value, cut the target video to obtain a post-cutting video.
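Outside the patent language, the segmentation logic described above can be sketched in a few lines. This is an illustrative reading of the described steps, not the patented implementation; all function names are assumptions:

```python
def local_maxima(seq):
    # Indices of interior elements strictly greater than both neighbors,
    # matching "quality values adjacent to a local maximum are smaller".
    return [i for i in range(1, len(seq) - 1)
            if seq[i] > seq[i - 1] and seq[i] > seq[i + 1]]


def cut_point(seq):
    """Index of the smallest quality value between the first two adjacent
    local maxima, or None if the sequence has fewer than two maxima."""
    peaks = local_maxima(seq)
    if len(peaks) < 2:
        return None
    # The description leaves the choice of pair open; take the first pair.
    left, right = peaks[0], peaks[1]
    return min(range(left + 1, right), key=lambda i: seq[i])
```

The cut then falls at the time point whose quality value is this minimum, i.e. at the low-quality trough between two locally sharp segments of the video.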
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flow charts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation of the unit itself; for example, the acquiring unit may also be described as "a unit for obtaining a target video".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (16)

1. A method for processing video, comprising:
obtaining a target video;
generating a quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and a quality value corresponding to the time point, the quality value is used to characterize the quality of the video frame at the time point in the time axis, and the elements in the quality value sequence are arranged in chronological order;
in response to determining that the quality value sequence includes at least two local maxima, selecting two adjacent local maxima from the at least two local maxima, wherein, in the quality value sequence, the quality values adjacent to a local maximum are smaller than the local maximum;
determining the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima;
cutting the target video based on the time point corresponding to the determined smallest quality value, to obtain a post-cutting video.
2. The method according to claim 1, wherein, after the generating the quality value sequence of the target video, the method further comprises:
generating, based on the elements in the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate; and
the cutting the target video based on the time point corresponding to the determined smallest quality value, to obtain a post-cutting video, comprises:
determining a target time range containing the time point corresponding to the smallest quality value, wherein the target time range is contained in the time range formed by the time points corresponding to the two selected adjacent local maxima;
determining a target point from the lines of the line graph, wherein the quality value corresponding to the target point is the smallest quality value within the target time range;
cutting the target video based on the time corresponding to the target point, to obtain the post-cutting video.
3. The method according to claim 1, wherein the generating the quality value sequence of the target video comprises:
for a time point in the time axis corresponding to the target video, executing the following steps: selecting a target number of video frames from the video frames at the time point; determining the video frame quality values of the selected target number of video frames respectively; and determining, based on the determined target number of video frame quality values, the quality value corresponding to the time point;
arranging the determined quality values according to the chronological order of the corresponding time points, to generate the quality value sequence.
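One plausible reading of claim 3, sketched as code. The claim does not fix how the per-frame scores are combined into a single quality value; the mean below, and all names, are assumptions:

```python
def quality_value_sequence(frames_at, time_points, score, n=3):
    """For each time point, take up to `n` frames at that point, score
    each frame, and combine the scores into one quality value (here: the
    mean, one plausible choice the claim leaves open)."""
    seq = []
    for t in time_points:  # time points in chronological order
        chosen = frames_at(t)[:n]          # select a target number of frames
        scores = [score(f) for f in chosen]  # per-frame quality values
        seq.append(sum(scores) / len(scores))
    return seq
```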
4. The method according to claim 3, wherein the determining the video frame quality values of the selected target number of video frames respectively comprises:
for a video frame in the selected target number of video frames, executing the following steps: performing a Laplacian transformation on the video frame to obtain a transformed video frame; determining the variance of the pixel values of the pixels in the transformed video frame; and determining the determined variance as the video frame quality value of the video frame.
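The variance-of-Laplacian sharpness measure in claim 4 can be sketched with a plain 3×3 Laplacian kernel, a common discrete choice; the claim does not fix the kernel, and the function name is an assumption:

```python
import numpy as np

# Standard 4-neighbor discrete Laplacian kernel (one common choice).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response of a 2-D grayscale frame;
    higher values indicate sharper (less blurred) frames."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 convolution written with shifted slices.
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

A uniform frame scores 0 (no detail), while a frame with edges scores higher, which is why the claimed method treats the variance as a frame quality value.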
5. The method according to claim 1, wherein the selecting two adjacent local maxima from the at least two local maxima comprises:
selecting, from the at least two local maxima, two local maxima that are adjacent and whose corresponding difference is greater than or equal to a preset threshold.
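Claim 5's thresholded pair selection might look like the following sketch, reading "corresponding difference" as the difference between the two maxima's quality values (one interpretation of the claim language; names are assumptions):

```python
def pick_adjacent_maxima(seq, peaks, threshold):
    """Return the first pair of adjacent local maxima (given as sorted
    indices into `seq`) whose quality difference is at least `threshold`,
    or None if no pair qualifies."""
    for a, b in zip(peaks, peaks[1:]):
        if abs(seq[a] - seq[b]) >= threshold:
            return a, b
    return None
```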
6. The method according to any one of claims 1-5, wherein the method further comprises:
for a post-cutting video among the obtained post-cutting videos, obtaining a label for characterizing the video features of the post-cutting video, and storing the post-cutting video in association with the obtained label.
7. The method according to claim 6, wherein the method further comprises:
in response to receiving a video acquisition request sent by a communicatively connected user terminal, executing the following steps: selecting, based on the labels corresponding to the obtained post-cutting videos, a post-cutting video corresponding to the video acquisition request from the obtained post-cutting videos; sending the selected post-cutting video to the user terminal, and controlling the user terminal to play the selected post-cutting video.
8. An apparatus for processing video, comprising:
an acquiring unit, configured to obtain a target video;
a first generation unit, configured to generate a quality value sequence of the target video, wherein an element in the quality value sequence is used to characterize a time point in the time axis of the target video and a quality value corresponding to the time point, the quality value is used to characterize the quality of the video frame at the time point in the time axis, and the elements in the quality value sequence are arranged in chronological order;
a selection unit, configured to select, in response to determining that the quality value sequence includes at least two local maxima, two adjacent local maxima from the at least two local maxima, wherein, in the quality value sequence, the quality values adjacent to a local maximum are smaller than the local maximum;
a determination unit, configured to determine the smallest quality value from the quality values in the quality value sequence located between the two selected adjacent local maxima;
a cutting unit, configured to cut the target video based on the time point corresponding to the determined smallest quality value, to obtain a post-cutting video.
9. The apparatus according to claim 8, wherein the apparatus further comprises:
a second generation unit, configured to generate, based on the elements in the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate; and
the cutting unit comprises:
a first determining module, configured to determine a target time range containing the time point corresponding to the smallest quality value, wherein the target time range is contained in the time range formed by the time points corresponding to the two selected adjacent local maxima;
a second determining module, configured to determine a target point from the lines of the line graph, wherein the quality value corresponding to the target point is the smallest quality value within the target time range;
a cutting module, configured to cut the target video based on the time corresponding to the target point, to obtain the post-cutting video.
10. The apparatus according to claim 8, wherein the first generation unit comprises:
an execution module, configured to execute, for a time point in the time axis corresponding to the target video, the following steps: selecting a target number of video frames from the video frames at the time point; determining the video frame quality values of the selected target number of video frames respectively; and determining, based on the determined target number of video frame quality values, the quality value corresponding to the time point;
a generation module, configured to arrange the determined quality values according to the chronological order of the corresponding time points, to generate the quality value sequence.
11. The apparatus according to claim 10, wherein the execution module is further configured to:
for a video frame in the selected target number of video frames, execute the following steps: perform a Laplacian transformation on the video frame to obtain a transformed video frame; determine the variance of the pixel values of the pixels in the transformed video frame; and determine the determined variance as the video frame quality value of the video frame.
12. The apparatus according to claim 8, wherein the selection unit is further configured to:
select, from the at least two local maxima, two local maxima that are adjacent and whose corresponding difference is greater than or equal to a preset threshold.
13. The apparatus according to any one of claims 8-12, wherein the apparatus further comprises:
a storage unit, configured to obtain, for a post-cutting video among the obtained post-cutting videos, a label for characterizing the video features of the post-cutting video, and to store the post-cutting video in association with the obtained label.
14. The apparatus according to claim 13, wherein the apparatus further comprises:
a transmission unit, configured to execute, in response to receiving a video acquisition request sent by a communicatively connected user terminal, the following steps: selecting, based on the labels corresponding to the obtained post-cutting videos, a post-cutting video corresponding to the video acquisition request from the obtained post-cutting videos; sending the selected post-cutting video to the user terminal, and controlling the user terminal to play the selected post-cutting video.
15. An electronic device, comprising:
one or more processors;
a storage device on which one or more programs are stored,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-7.
16. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
CN201910492581.8A 2019-06-06 2019-06-06 Method and apparatus for processing video Active CN110087145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910492581.8A CN110087145B (en) 2019-06-06 2019-06-06 Method and apparatus for processing video


Publications (2)

Publication Number Publication Date
CN110087145A true CN110087145A (en) 2019-08-02
CN110087145B CN110087145B (en) 2020-04-21

Family

ID=67423816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492581.8A Active CN110087145B (en) 2019-06-06 2019-06-06 Method and apparatus for processing video

Country Status (1)

Country Link
CN (1) CN110087145B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080069208A1 (en) * 1992-01-29 2008-03-20 Tomohiro Ueda High-efficiency encoder and video information recording/reproducing apparatus
CN101443850A (en) * 2006-05-12 2009-05-27 惠普开发有限公司 Key-frame extraction from video
CN102025892A (en) * 2009-09-16 2011-04-20 索尼株式会社 Lens conversion detection method and device
CN104902342A (en) * 2014-03-07 2015-09-09 无锡天脉聚源传媒科技有限公司 Scene frame based system and method for locating video
CN106462744A (en) * 2014-06-12 2017-02-22 微软技术许可有限责任公司 Rule-based video importance analysis
CN107135401A (en) * 2017-03-31 2017-09-05 北京奇艺世纪科技有限公司 Key frame extraction method and system
CN107424163A (en) * 2017-06-09 2017-12-01 广东技术师范学院 A kind of lens boundary detection method based on TextTiling
CN108696671A (en) * 2017-04-10 2018-10-23 英特尔公司 For accelerating Scene change detection and realizing the technology that adaptive content shows
CN108764060A (en) * 2018-05-07 2018-11-06 中国传媒大学 Video lens edge detection method based on sliding window
US20180322636A1 (en) * 2016-08-23 2018-11-08 Disney Enterprises, Inc. Video Object Tagging using Synthetic Images and Segmentation Hierarchies
CN109151501A (en) * 2018-10-09 2019-01-04 北京周同科技有限公司 A kind of video key frame extracting method, device, terminal device and storage medium
CN109688433A (en) * 2018-11-22 2019-04-26 深圳艺达文化传媒有限公司 Shearing video method and Related product based on picture



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
