Detailed Description of Embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely used to explain the related invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the case of no conflict, the embodiments in the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing a video or the apparatus for processing a video of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing a communication link between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various types of connections, such as wired or wireless communication links, or optical fiber cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 via the network 104 to receive or send messages, etc. Various communication client applications, such as video processing applications, web browser applications, search applications, instant messaging tools, email clients, and social platform software, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When being hardware, the terminal devices 101, 102 and 103 may be various electronic devices having a camera, including but not limited to a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, etc. When being software, the terminal devices 101, 102 and 103 may be installed in the above-listed electronic devices. They may be implemented as a plurality of pieces of software or software modules (e.g., a plurality of pieces of software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services, for example, a video processing server that processes a target video obtained by the terminal devices 101, 102 and 103 through shooting. The video processing server may analyze and otherwise process the received data such as the target video, and obtain a processing result (e.g., post-segmentation videos).
It should be noted that the method for processing a video provided by the embodiments of the present disclosure may be performed by the terminal devices 101, 102 and 103, or by the server 105. Correspondingly, the apparatus for processing a video may be provided in the terminal devices 101, 102 and 103, or in the server 105.
It should be noted that the server may be hardware or software. When being hardware, the server may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When being software, the server may be implemented as a plurality of pieces of software or software modules (e.g., a plurality of pieces of software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
It should be understood that the numbers of the terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation requirements. In the case where the data used in the process of generating the post-segmentation videos does not need to be obtained remotely, the above system architecture may not include the network, and may include only the terminal device or the server.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for processing a video according to the present disclosure is shown. The method for processing a video includes the following steps.
Step 201: acquiring a target video.
In this embodiment, an executing body of the method for processing a video (e.g., the server shown in Fig. 1) may acquire the target video remotely or locally through a wired connection or a wireless connection. Here, the target video is a video to be processed. In practice, the target video may be a video obtained by shooting an actual scene.
Step 202: generating a quality value sequence of the target video.
In this embodiment, based on the target video acquired in step 201, the executing body may generate the quality value sequence of the target video. An element in the quality value sequence is used to characterize a time point in the timeline of the target video and a quality value corresponding to the time point.
Specifically, the timeline of the target video may be used to connect the video frames in chronological order to obtain the target video. It can be understood that the timeline of the target video may be determined while the target video is being shot. In practice, the minimum unit on the timeline may be 1 second, i.e., the interval between two adjacent time points on the timeline is 1 second. As an example, the starting point of the timeline may be 00:00 (0 minutes, 0 seconds); the first time point may be 00:01 (0 minutes, 1 second); the second time point may be 00:02 (0 minutes, 2 seconds); the third time point may be 00:03 (0 minutes, 3 seconds), and so on.
In this embodiment, the elements in the quality value sequence are arranged in chronological order. The time points corresponding to the elements in the quality value sequence may be consecutive time points on the timeline, or discrete time points, which is not limited herein. Specifically, the time points corresponding to the elements in the quality value sequence may be time points at positions on the timeline predetermined by a technician, or all of the time points on the timeline, or time points determined at random from the timeline.
In this embodiment, a quality value is used to characterize the quality of the video frames at a time point on the timeline. Specifically, the video frames at a time point on the timeline may be the video frames shot within the period composed of the time point and the next time point adjacent to it on the timeline. For example, for the time point "00:02", the video frames at this time point may be the video frames shot within the period "00:02-00:03".
In this embodiment, the quality value corresponding to a time point in the quality value sequence may be determined from the quality of the video frames at the time point. Specifically, as an example, for a time point corresponding to an element in the quality value sequence, the executing body may first acquire the video frames at the time point, then determine the video frame quality value of each acquired video frame, and finally average the determined video frame quality values to obtain the result of the operation as the quality value corresponding to the time point. Here, a video frame quality value may be used to characterize the quality of a video frame. Specifically, a larger video frame quality value may characterize a better quality of the video frame.
Here, the video frame quality value of a video frame may be determined using various methods; for example, an existing image quality recognition model may be used to determine the video frame quality value of the video frame.
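As a minimal sketch of the averaging approach described above: given the frames grouped by time point, the quality value of each time point is the mean of its per-frame quality values. The scoring function `frame_quality` is a hypothetical stand-in for whatever image quality recognition model is used; it is not part of the disclosure.

```python
def quality_value_sequence(frames_by_time, frame_quality):
    """Build the quality value sequence: for each time point (in
    chronological order), average the quality values of its frames."""
    return [
        (t, sum(frame_quality(f) for f in frames) / len(frames))
        for t, frames in sorted(frames_by_time.items())
    ]

# Toy example: each "frame" is just a number, scored by identity.
frames = {"00:01": [8, 10], "00:00": [3, 5], "00:02": [6, 6]}
seq = quality_value_sequence(frames, lambda f: f)
print(seq)  # [('00:00', 4.0), ('00:01', 9.0), ('00:02', 6.0)]
```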
In some optional implementations of this embodiment, the executing body may generate the quality value sequence of the target video through the following steps. First, for each time point on the timeline corresponding to the target video, the executing body may perform the following steps: selecting a target number of video frames from the video frames at the time point; determining the video frame quality values of the selected target number of video frames respectively; and determining, based on the determined target number of video frame quality values, the quality value corresponding to the time point. Then, the executing body may arrange the determined quality values according to the chronological order of the corresponding time points, to generate the quality value sequence.
In this implementation, the target number may be a predetermined number, or a number determined based on the number of video frames at the time point (e.g., half the number of the video frames at the time point). Specifically, the executing body may select the target number of video frames using various methods, for example, at random, or in chronological order.
Here, the executing body may determine the video frame quality value of a video frame using the above image quality recognition model. In addition, in some optional implementations of this embodiment, for a video frame in the selected target number of video frames, the executing body may also perform the following steps: performing a Laplace transform on the video frame to obtain a transformed video frame; determining the variance of the pixel values of the pixels in the transformed video frame; and determining the determined variance as the video frame quality value of the video frame.
In practice, for a flat region of an image (a region where the gray value does not change), the intensity values of the image after the Laplace transform are 0 in that region. At regions where the gray value of the image changes sharply (edges), the intensity values (absolute values) of the image after the Laplace transform are relatively large. Therefore, the Laplace transform is commonly used to detect the edge information of an image. It can thus be understood that, for the edge-enhanced video frame (the transformed video frame), a larger variance can characterize sharper edges of the video frame, and in turn a better quality of the video frame.
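A minimal pure-Python sketch of this variance-of-Laplacian sharpness measure, using the common 4-neighbor discrete Laplacian kernel on a grayscale image given as a list of rows (a real implementation would typically use an image-processing library instead):

```python
def laplacian_variance(img):
    """Apply a 4-neighbor discrete Laplacian to a grayscale image
    (list of rows of pixel values) and return the variance of the
    responses over the interior pixels."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

flat = [[5] * 5 for _ in range(5)]              # no edges
edged = [[0, 0, 9, 0, 0] for _ in range(5)]     # sharp vertical stripe
print(laplacian_variance(flat))       # 0.0
print(laplacian_variance(edged) > 0)  # True
```

As the paragraph above explains, the flat image yields zero responses everywhere (variance 0), while the image with a sharp edge yields large responses and hence a large variance.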
In this implementation, based on the determined target number of video frame quality values, the executing body may determine the quality value corresponding to the time point using various methods. For example, the determined target number of video frame quality values may be averaged, and the result of the operation may be obtained as the quality value corresponding to the time point; alternatively, the largest video frame quality value among the target number of video frame quality values may be determined as the quality value corresponding to the time point.
Specifically, for the quality values corresponding to the determined time points, the executing body may arrange the determined quality values according to the chronological order of the time points, from first to last, to generate the quality value sequence; alternatively, the determined quality values may be arranged from last to first to generate the quality value sequence.
Step 203: selecting, in response to determining that the quality value sequence includes at least two local maxima, two adjacent local maxima from the at least two local maxima.
In this embodiment, based on the quality value sequence obtained in step 202, the executing body may select, in response to determining that the quality value sequence includes at least two local maxima, two adjacent local maxima from the at least two local maxima. Here, in the quality value sequence, the quality values adjacent to a local maximum are smaller than the local maximum.
It can be understood that, when shooting with a camera, if the target that is expected to be shot changes, the camera may be moved in order to continue shooting the new target, so that the target is included in the shooting range. In this case, the movement of the camera causes the definition of the shot video frames to be low, and correspondingly, the quality values of the shot video frames to be small. When the target is included in the shooting range, the camera may be in a relatively steady state; at this point, the definition of the shot video frames is high, and correspondingly, the quality values of the shot video frames are large. When the scene needs to be switched to shoot a new target, the camera needs to be moved again so that the new target is included in the shooting range; in this case, the definition of the shot video frames decreases, and correspondingly, the quality values of the shot video frames decrease. When the new target is included in the shooting range, the camera is again in a relatively steady state, the definition of the shot video frames increases again, and correspondingly, the quality values of the shot video frames become larger.
Accordingly, in this embodiment, a local maximum may correspond to a relatively steady state of the camera during shooting, and a quality value sequence including at least two local maxima may characterize that the camera changed from a moving state to a relatively steady state at least twice during shooting, and may in turn characterize that a scene switch has occurred. Moreover, the change process of the quality values between two adjacent local maxima may correspond to the process of a scene switch.
In particular, since a local maximum usually corresponds to a relatively steady state of the camera during shooting, the initial stage and the ending stage of shooting usually do not correspond to local maxima. Thus, in this embodiment, a local maximum may be a quality value that has two adjacent quality values in the quality value sequence and is greater than both of its two adjacent quality values.
Here, the executing body may select the two adjacent local maxima using various methods. Specifically, when the at least two local maxima included in the quality value sequence include only two local maxima, the executing body may directly select these two local maxima. When the at least two local maxima include at least three local maxima, for every two adjacent local maxima among the at least three local maxima, the executing body may select these two adjacent local maxima to form a local maximum group.
As an example, the quality value sequence is "3; 8; 2; 6; 4; 5; 7; 1". Here, "2" and "3", which are adjacent to "8", are both smaller than "8", so "8" is a local maximum; "4" and "2", which are adjacent to "6", are both smaller than "6", so "6" is a local maximum; "1" and "5", which are adjacent to "7", are both smaller than "7", so "7" is also a local maximum. It can be seen that the quality value sequence includes the local maxima "8", "6" and "7". Accordingly, the executing body may select the two adjacent local maxima "8" and "6" to form a local maximum group, and select the two adjacent local maxima "6" and "7" to form another local maximum group.
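The interior-point definition of a local maximum given above (a quality value strictly greater than both of its neighbors) can be sketched as follows, reproducing the worked example:

```python
def local_maxima(seq):
    """Return (index, value) pairs for interior elements that are
    strictly greater than both of their neighbors."""
    return [(i, seq[i]) for i in range(1, len(seq) - 1)
            if seq[i - 1] < seq[i] > seq[i + 1]]

seq = [3, 8, 2, 6, 4, 5, 7, 1]
maxima = local_maxima(seq)
print([v for _, v in maxima])             # [8, 6, 7]
# Adjacent pairs form the local maximum groups:
groups = list(zip(maxima, maxima[1:]))
print([(a[1], b[1]) for a, b in groups])  # [(8, 6), (6, 7)]
```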
In some optional implementations of this embodiment, the executing body may also select, from the at least two local maxima, two local maxima that are adjacent and whose corresponding difference is greater than or equal to a preset threshold.
Here, the preset threshold may be a preset numerical value. Continuing the above example, for the quality value sequence "3; 8; 2; 6; 4; 5; 7; 1", the executing body determines the three local maxima "8", "6" and "7". The preset threshold may be 2. Then, since the difference between "6" and "7" is smaller than 2, while the difference between "8" and "6" is greater than or equal to 2, the executing body may select the two local maxima "8" and "6".
It can be understood that the larger the difference is, the greater the possibility that the scenes corresponding to the two adjacent local maxima are different scenes. Therefore, this implementation can more accurately determine local maxima corresponding to different scenes, which helps to segment the target video by scene more precisely.
It should be noted that, when the executing body obtains at least two local maximum groups based on step 203 (here, one local maximum group includes two adjacent local maxima), the executing body may perform the subsequent step 204 and step 205 for each local maximum group in the at least two local maximum groups.
Step 204: determining the smallest quality value from the quality values in the quality value sequence that are between the selected two adjacent local maxima.
In this embodiment, based on the two adjacent local maxima selected in step 203, the executing body may determine the smallest quality value from the quality values in the quality value sequence that are between the selected two adjacent local maxima.
Continuing the example in step 203, the quality value sequence is "3; 8; 2; 6; 4; 5; 7; 1", and the three determined local maxima are "8", "6" and "7". Then, for the two adjacent local maxima "8" and "6", it may be determined from the quality value sequence that the quality values between these two local maxima include only "2", so "2" may be directly determined as the smallest quality value corresponding to "8" and "6". For the two adjacent local maxima "6" and "7", it may be determined from the quality value sequence that the quality values between these two local maxima include "4" and "5", and the executing body may then determine "4" as the smallest quality value corresponding to "6" and "7".
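A sketch of this step, locating the smallest quality value (together with its position, which the segmentation step needs) between each pair of adjacent local maxima in the example sequence:

```python
def min_between(seq, i, j):
    """Return (index, value) of the smallest quality value strictly
    between positions i and j of the quality value sequence."""
    k = min(range(i + 1, j), key=lambda idx: seq[idx])
    return k, seq[k]

seq = [3, 8, 2, 6, 4, 5, 7, 1]   # local maxima at indices 1, 3, 6
print(min_between(seq, 1, 3))    # (2, 2): only '2' lies between '8' and '6'
print(min_between(seq, 3, 6))    # (4, 4): '4' is smaller than '5'
```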
It can be understood that the smallest quality value between adjacent local maxima usually corresponds to the time point at which the movement of the camera is most apparent during the scene switch. Therefore, segmenting the target video based on the smallest quality value between adjacent local maxima helps to segment the target video more accurately into videos of different scenes.
Step 205: segmenting the target video based on the time point corresponding to the determined smallest quality value, to obtain post-segmentation videos.
In this embodiment, based on the time point corresponding to the smallest quality value determined in step 204, the executing body may segment the target video to obtain the post-segmentation videos.
Specifically, based on the time point corresponding to the smallest quality value, the executing body may segment the target video using various methods. For example, the executing body may segment the target video directly at the time point corresponding to the smallest quality value, so that the video frames in the target video shot before the time point and the video frames shot after the time point respectively constitute post-segmentation videos. Here, it should be noted that the video frames shot at the time point corresponding to the smallest quality value may be included in the post-segmentation video composed of the video frames shot before the time point, or in the post-segmentation video composed of the video frames shot after the time point, which is not limited herein.
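Segmenting directly at the time point can be sketched as a split of a timestamped frame list; here the frames at the cut time point itself are assigned, arbitrarily (as the paragraph above permits), to the later segment:

```python
def split_at(frames, cut_time):
    """Split a list of (timestamp, frame) pairs into two
    post-segmentation videos at the given cut time; frames at the
    cut time itself go to the second segment."""
    before = [(t, f) for t, f in frames if t < cut_time]
    after = [(t, f) for t, f in frames if t >= cut_time]
    return before, after

frames = [(0, "a"), (1, "b"), (2, "c"), (3, "d")]
print(split_at(frames, 2))
# ([(0, 'a'), (1, 'b')], [(2, 'c'), (3, 'd')])
```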
In some optional implementations of this embodiment, for a post-segmentation video in the obtained post-segmentation videos, the executing body may acquire a label for characterizing a video feature of the post-segmentation video, and store the post-segmentation video and the acquired label in association. Here, the video feature may be any of various features of the post-segmentation video, including but not limited to at least one of: a size, a corresponding scene, or a rank in the obtained post-segmentation videos. Here, the obtained post-segmentation videos may be arranged in chronological order.
Specifically, the executing body may acquire the label in various manners. For example, for a post-segmentation video in the obtained post-segmentation videos, the executing body may output the post-segmentation video, and then acquire a label input by a user for the post-segmentation video. Alternatively, the executing body may acquire a pre-stored label set; then, for a post-segmentation video in the obtained post-segmentation videos, the executing body may first recognize the post-segmentation video (e.g., recognize the size of the post-segmentation video, or recognize the rank of the post-segmentation video in the obtained post-segmentation videos) to obtain a recognition result (e.g., ranked second), and then acquire the label (e.g., "2") corresponding to the post-segmentation video from the label set (e.g., "1; 2; 3; 4").
By storing the labels and the post-segmentation videos in association, this implementation helps to extract the video clips of the target video (the post-segmentation videos) quickly and conveniently based on the labels, which in turn helps to improve the efficiency of video processing.
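The associated storage and label-based extraction described in this implementation could be sketched as a simple mapping; the rank-based labels "1", "2", ... follow the example above, and the file names are purely illustrative:

```python
def store_with_labels(segments):
    """Store each post-segmentation video in association with a label
    derived from its rank in chronological order."""
    return {str(rank): seg
            for rank, seg in enumerate(segments, start=1)}

store = store_with_labels(["scene_a.mp4", "scene_b.mp4"])
print(store["2"])  # scene_b.mp4 -- extracted directly via its label
```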
In some optional implementations of this embodiment, the executing body may also, in response to receiving a video acquisition request sent by a communicatively connected user terminal, perform the following steps: selecting, based on the labels corresponding to the obtained post-segmentation videos, a post-segmentation video corresponding to the video acquisition request from the obtained post-segmentation videos; and sending the selected post-segmentation video to the user terminal, and controlling the user terminal to play the selected post-segmentation video.
Here, the video acquisition request may include information (e.g., the text "segment 1") indicating a video feature of the requested video, and the executing body may then select, based on the labels (e.g., "1"), the post-segmentation video corresponding to the video acquisition request from the obtained post-segmentation videos. It can be understood that the post-segmentation video corresponding to the video acquisition request is the post-segmentation video whose corresponding video feature is the same as or similar to the video feature indicated by the video acquisition request.
In this implementation, the user terminal may be a terminal used by the user and communicatively connected to the executing body. In practice, the executing body may send a control signal to the user terminal, thereby controlling the user terminal to play the selected post-segmentation video.
In this implementation, the post-segmentation videos obtained by segmenting by scene can be sent to the user terminal, and the user terminal can be controlled to play the post-segmentation videos. In this way, the user terminal can be controlled to play, in a targeted manner, the video corresponding to the scene and the video acquisition request, which can in turn improve the playing effect of the video. Moreover, compared with controlling the user terminal to play the whole target video, this implementation can reduce the waste of resources.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing a video according to this embodiment. In the application scenario of Fig. 3, the server 301 may first acquire a video of a cat (the target video) 302 obtained by shooting the cat. Then, the server 301 may generate a quality value sequence 303 ("8; 9; 6; 4; 7; 3") of the video 302 of the cat, where an element in the quality value sequence 303 is used to characterize a time point in the timeline of the video 302 of the cat and a quality value corresponding to the time point; the elements in the quality value sequence 303 are arranged in chronological order; and a quality value is used to characterize the quality of the video frames at a time point on the timeline (for example, in the quality value sequence "8; 9; 6; 4; 7; 3", the quality value "8" may be used to characterize the quality of the video frames at 12:01 (12 minutes, 1 second) in the timeline of the video 302 of the cat; "9" may be used to characterize the quality of the video frames at 12:02 in the timeline of the video 302 of the cat; "6" may be used to characterize the quality of the video frames at 12:03 in the timeline of the video 302 of the cat; and so on). Then, the server 301 may, in response to determining that the quality value sequence 303 includes at least two local maxima, namely the local maximum 3031 ("9") and the local maximum 3032 ("7"), select the local maximum 3031 and the local maximum 3032, where in the quality value sequence 303, the quality values adjacent to a local maximum are smaller than the local maximum. Then, the server 301 may determine the smallest quality value 3033 ("4") from the quality values ("6; 4") in the quality value sequence 303 that are between the local maximum 3031 and the local maximum 3032. Finally, the server 301 may segment the video 302 of the cat based on the time point ("12:04") corresponding to the determined smallest quality value 3033, to obtain the post-segmentation video 304 and the post-segmentation video 305.
The method provided by the above embodiment of the present disclosure can segment a video based on the quality of the video frames. Since the process of switching scenes during shooting usually causes the quality of the video frames to decrease, segmenting the video at the time points of poor quality in the change process of the quality of the video frames makes it possible to segment the video according to different scenes, thereby facilitating subsequent video storage and video extraction based on different scenes. Moreover, compared with the prior-art method of manually segmenting a video, segmenting the video based on the quality of the video frames can improve the efficiency of video segmentation.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing a video is shown. The flow 400 of the method for processing a video includes the following steps.
Step 401: acquiring a target video.
In this embodiment, an executing body of the method for processing a video (e.g., the server shown in Fig. 1) may acquire the target video remotely or locally through a wired connection or a wireless connection. Here, the target video is a video to be processed. In practice, the target video may be a video obtained by shooting an actual scene.
Step 402: generating a quality value sequence of the target video.
In this embodiment, based on the target video acquired in step 401, the executing body may generate the quality value sequence of the target video. An element in the quality value sequence is used to characterize a time point in the timeline of the target video and a quality value corresponding to the time point. The elements in the quality value sequence are arranged in chronological order. A quality value is used to characterize the quality of the video frames at a time point on the timeline.
Step 403: generating, based on the elements in the quality value sequence, a line graph with time as the abscissa and the quality value as the ordinate.
In this embodiment, based on the elements in the quality value sequence generated in step 402, the executing body may generate the line graph with time as the abscissa and the quality value as the ordinate.
Specifically, the executing body may take the elements in the quality value sequence as known points in a coordinate system with time as the abscissa and the quality value as the ordinate, and generate, by means of fitting, a line characterizing the correspondence between the quality values and time. As an example, Fig. 5 shows a schematic diagram of the line graph generated in this embodiment. In the figure, 501 is the line generated by means of fitting. "12:01, 12:02, 12:03, 12:04, 12:05", marked on the abscissa in the figure, respectively correspond to the time points corresponding to the elements in the quality value sequence. The points on the line corresponding to the time points correspond to the elements in the quality value sequence.
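As a minimal sketch, the fitting could be as simple as piecewise-linear interpolation between the known (time, quality value) points; more elaborate curve fitting is equally possible. Times here are given in seconds for simplicity, and the values follow the example sequence:

```python
def fitted_line(points):
    """Return a function t -> quality value that linearly
    interpolates between the known (time, value) points."""
    points = sorted(points)
    def f(t):
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        raise ValueError("t outside the fitted range")
    return f

f = fitted_line([(1, 8), (2, 9), (3, 6), (4, 4), (5, 7)])
print(f(2))    # 9.0 -- a known point lies on the line
print(f(2.5))  # 7.5 -- halfway between 9 and 6
```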
Step 404: selecting, in response to determining that the quality value sequence includes at least two local maxima, two adjacent local maxima from the at least two local maxima.
In this embodiment, based on the quality value sequence obtained in step 402, the executing body may select, in response to determining that the quality value sequence includes at least two local maxima, two adjacent local maxima from the at least two local maxima. Here, in the quality value sequence, the quality values adjacent to a local maximum are smaller than the local maximum.
Step 405: determining the smallest quality value from the quality values in the quality value sequence that are between the selected two adjacent local maxima.
In this embodiment, based on the two adjacent local maxima selected in step 404, the executing body may determine the smallest quality value from the quality values in the quality value sequence that are between the selected two adjacent local maxima.
The above step 401, step 402, step 404 and step 405 may be performed in manners similar to step 201, step 202, step 203 and step 204 in the foregoing embodiment, respectively. The above descriptions of step 201, step 202, step 203 and step 204 also apply to step 401, step 402, step 404 and step 405, and are not repeated here.
Step 406: determining a target time range including the time point corresponding to the smallest quality value.
In this embodiment, based on the smallest quality value obtained in step 405, the executing body may determine the target time range including the time point corresponding to the smallest quality value. Here, the target time range is included in the time range composed of the time points respectively corresponding to the selected two adjacent local maxima. Here, the time range composed of the time points respectively corresponding to the two adjacent local maxima is specifically the time range with one of the two time points as the starting point and the other time point as the end point.
Specifically, the executing body may determine the target time range using various methods. For example, the executing body may directly determine the time range composed of the time points respectively corresponding to the selected two adjacent local maxima as the target time range. Alternatively, a time range including the time point corresponding to the smallest quality value may be determined at random within the time range composed of the time points respectively corresponding to the selected two adjacent local maxima, as the target time range. Alternatively, a preset duration t (e.g., 0.1 s) may be acquired, and assuming that the time point corresponding to the smallest quality value is x, the target time range may be determined as (x - t, x + t).
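The preset-duration variant above, additionally clamped to the time range spanned by the two adjacent local maxima so that the target time range stays inside it, could be sketched as follows (times given in seconds; the clamping is an assumption added for illustration, following the requirement that the target time range be included in the spanned range):

```python
def target_range(x, t, lo, hi):
    """Return the target time range (x - t, x + t) around the time
    point x of the smallest quality value, clamped to the range
    [lo, hi] spanned by the two adjacent local maxima."""
    return max(lo, x - t), min(hi, x + t)

# Smallest quality value at 244 s, maxima at 242 s and 244 s.
print(target_range(244.0, 0.1, 242.0, 244.0))  # (243.9, 244.0)
```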
As an example, with reference to Fig. 5, 502 in the figure is the time range composed of the time points (12:02 and 12:04) respectively corresponding to the selected two adjacent local maxima, and 503 in the figure is the determined target time range.
Step 407: determining a target point from the line of the line graph.
In this embodiment, based on the line graph obtained in step 403, the executing body may determine the target point from the line of the line graph, where the quality value corresponding to the target point is the smallest quality value within the target time range determined in step 406.
As an example, with reference to Fig. 5, 504 in the figure denotes the target point; its corresponding quality value is the smallest quality value within the target time range 503.
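The target-point search described above amounts to taking the minimum of the sampled curve restricted to the target time range. A minimal sketch, assuming the curve is available as parallel lists of times and quality values (names are illustrative, not from the disclosure):

```python
def find_target_point(times, values, t_range):
    """Return the (time, quality value) pair whose quality value is the
    smallest among the sampled points falling inside t_range."""
    lo, hi = t_range
    candidates = [(v, t) for t, v in zip(times, values) if lo <= t <= hi]
    if not candidates:
        raise ValueError("no sampled point inside the target time range")
    v_min, t_min = min(candidates)  # min by quality value first
    return t_min, v_min
```

The returned time is the split time used in step 408.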
Step 408: split the target video based on the time corresponding to the target point to obtain post-split videos.
In the present embodiment, based on the time corresponding to the target point determined in step 407, the above-described execution body may split the target video to obtain post-split videos.
As an example, with reference to Fig. 5, 505 in the figure denotes the time corresponding to the target point 504 (approximately 12:02.8, i.e., 12 minutes 2.8 seconds).
Specifically, the above-described execution body may split the target video in various ways based on the time corresponding to the target point. For example, it may split directly at that time, so that, within the target video, the video frames captured before that time and the video frames captured after that time form separate post-split videos. It should be noted that a video frame captured exactly at the time corresponding to the target point may be included either in the post-split video formed by the frames captured before that time or in the post-split video formed by the frames captured after that time; no restriction is imposed here.
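The splitting rule above can be sketched on a frame list as follows; this is one plausible rendering, with the boundary-frame choice exposed as a flag since the disclosure leaves it open (all names are illustrative):

```python
def split_at_time(frames, split_time, keep_boundary_in_first=True):
    """frames: list of (timestamp, frame) pairs in chronological order.
    Frames before split_time go to the first post-split video, frames
    after it to the second; a frame captured exactly at split_time may
    go to either (the disclosure imposes no restriction)."""
    first, second = [], []
    for ts, frame in frames:
        if ts < split_time or (ts == split_time and keep_boundary_in_first):
            first.append((ts, frame))
        else:
            second.append((ts, frame))
    return first, second
```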
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for processing video in the present embodiment highlights the steps of generating a line graph based on the quality value sequence, determining a target point from the curve of the line graph, and then splitting the target video based on the time corresponding to the target point to obtain post-split videos. Accordingly, the scheme described in the present embodiment can analyze, as a whole, the quality values of the video at various moments based on the line graph, thereby determining more accurately the moment corresponding to the smallest quality value during a scene switch, and in turn splitting the video by scene more accurately, which improves the accuracy of video processing. In addition, since the target point is determined based on the time point corresponding to the smallest quality value in the quality value sequence, the search range for the target point on the curve of the line graph can be narrowed, which improves the efficiency of determining the target point and reduces the resources consumed in doing so.
With further reference to Fig. 6, as an implementation of the methods shown in the figures above, the present disclosure provides an embodiment of an apparatus for processing video. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus is applicable to various electronic devices.
As shown in Fig. 6, the apparatus 600 for processing video of the present embodiment includes: an acquiring unit 601, a first generation unit 602, a selection unit 603, a determination unit 604, and a splitting unit 605. The acquiring unit 601 is configured to acquire a target video. The first generation unit 602 is configured to generate a quality value sequence of the target video, where each element in the quality value sequence characterizes a time point on the time axis of the target video and the quality value corresponding to that time point; a quality value characterizes the quality of the video frame at that time point on the time axis, and the elements in the quality value sequence are arranged in chronological order. The selection unit 603 is configured to, in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, where a quality value adjacent to a local maximum in the quality value sequence is smaller than that local maximum. The determination unit 604 is configured to determine the smallest quality value among the quality values in the quality value sequence lying between the two selected adjacent local maxima. The splitting unit 605 is configured to split the target video based on the time point corresponding to the determined smallest quality value to obtain post-split videos.
In the present embodiment, the acquiring unit 601 of the apparatus 600 for processing video may acquire the target video remotely or locally through a wired or wireless connection. The target video is the video to be processed. In practice, the target video may be a video obtained by shooting an actual scene.
In the present embodiment, based on the target video acquired by the acquiring unit 601, the first generation unit 602 may generate a quality value sequence of the target video. Each element in the quality value sequence characterizes a time point on the time axis of the target video and the quality value corresponding to that time point. The elements in the quality value sequence are arranged in chronological order, and a quality value characterizes the quality of the video frame at the corresponding time point on the time axis.
In the present embodiment, based on the quality value sequence obtained by the first generation unit 602, the selection unit 603 may, in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima. In the quality value sequence, a quality value adjacent to a local maximum is smaller than that local maximum.
In the present embodiment, based on the two adjacent local maxima selected by the selection unit 603, the determination unit 604 may determine the smallest quality value among the quality values in the quality value sequence lying between the two selected adjacent local maxima.
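On a concrete quality value sequence, the determination described above is a minimum taken over the segment strictly between the two peak indices. A minimal sketch under assumed names (not part of the disclosure):

```python
def smallest_between(values, i_left, i_right):
    """Given the indices of two adjacent local maxima in the quality
    value sequence, return the index and value of the smallest quality
    value strictly between them."""
    segment = values[i_left + 1:i_right]
    v_min = min(segment)
    return i_left + 1 + segment.index(v_min), v_min
```

The returned index maps back to a time point on the video's time axis, which the splitting unit then uses.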
In the present embodiment, based on the time point corresponding to the smallest quality value determined by the determination unit 604, the splitting unit 605 may split the target video to obtain post-split videos.
In some optional implementations of the present embodiment, the apparatus 600 may further include a second generation unit (not shown in the figure), configured to generate, based on the elements in the quality value sequence, a line graph with time as the abscissa and quality value as the ordinate. The splitting unit 605 may then include: a first determining module (not shown), configured to determine a target time range containing the time point corresponding to the smallest quality value, where the target time range is contained within the time range spanned by the time points corresponding to the two selected adjacent local maxima; a second determining module (not shown), configured to determine a target point from the curve of the line graph, where the quality value corresponding to the target point is the smallest quality value within the target time range; and a splitting module (not shown), configured to split the target video based on the time corresponding to the target point to obtain post-split videos.
In some optional implementations of the present embodiment, the first generation unit 602 may include: an execution module (not shown in the figure), configured to execute the following steps for each time point on the time axis of the target video: select a target number of video frames from the video frames at that time point; determine the frame quality value of each selected video frame; and determine the quality value corresponding to that time point based on the determined frame quality values. A generation module (not shown) is configured to arrange the determined quality values in chronological order of the corresponding time points to generate the quality value sequence.
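The per-time-point steps above can be sketched as follows. Note that the disclosure does not fix how the frame quality values are aggregated into one quality value per time point; the arithmetic mean is used here as one plausible choice, and all names are illustrative:

```python
def time_point_quality(frames_at_t, frame_quality, target_number=3):
    """Select up to target_number frames at this time point, score each
    with frame_quality, and aggregate. Aggregation by mean is an
    assumption; the disclosure leaves the method open."""
    chosen = frames_at_t[:target_number]
    scores = [frame_quality(f) for f in chosen]
    return sum(scores) / len(scores)
```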
In some optional implementations of the present embodiment, the execution module may be further configured to execute the following steps for each video frame among the selected target number of video frames: apply a Laplace transform to the video frame to obtain a transformed video frame; determine the variance of the pixel values of the pixels in the transformed video frame; and determine the determined variance as the frame quality value of that video frame.
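This variance-of-Laplacian measure is a standard sharpness metric: a sharp frame has strong edge responses and therefore a high variance, while a blurred frame (e.g., during a scene switch) has a low one. A self-contained sketch using only NumPy and an explicit 3x3 Laplacian kernel (the disclosure does not specify the kernel; this is one common choice):

```python
import numpy as np

# Common 3x3 discrete Laplacian kernel (an assumption; not fixed by the disclosure).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Convolve a grayscale frame with the Laplacian kernel (valid region
    only) and return the variance of the response as the frame quality value."""
    gray = np.asarray(gray, dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

In practice one would typically use an optimized routine (e.g., OpenCV's `cv2.Laplacian(gray, cv2.CV_64F).var()`) rather than the explicit loop shown here.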
In some optional implementations of the present embodiment, the selection unit 603 may be further configured to select, from the at least two local maxima, two local maxima that are adjacent and whose corresponding difference is greater than or equal to a preset threshold.
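A minimal sketch of this selection, reading "corresponding difference" as the difference between the two maxima's quality values (other readings are possible; the disclosure is not explicit, and all names here are illustrative):

```python
def local_maxima(values):
    """Indices whose quality value exceeds both adjacent values."""
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

def select_adjacent_pair(values, threshold=0.0):
    """Return the first pair of adjacent local maxima whose quality
    values differ by at least the preset threshold, or None."""
    peaks = local_maxima(values)
    for a, b in zip(peaks, peaks[1:]):
        if abs(values[a] - values[b]) >= threshold:
            return a, b
    return None
```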
In some optional implementations of the present embodiment, the apparatus 600 may further include a storage unit (not shown in the figure), configured to, for each post-split video among the obtained post-split videos, acquire a tag characterizing the video features of that post-split video, and store the post-split video in association with the acquired tag.
In some optional implementations of the present embodiment, the apparatus 600 may further include a transmission unit configured to, in response to receiving a video acquisition request sent by a communicatively connected user terminal, execute the following steps: based on the tags corresponding to the obtained post-split videos, select from the obtained post-split videos a post-split video corresponding to the video acquisition request; and send the selected post-split video to the user terminal and control the user terminal to play the selected post-split video.
It can be understood that the units recorded in the apparatus 600 correspond to the respective steps of the method described with reference to Fig. 2. Accordingly, the operations, features, and beneficial effects described above with respect to the method are equally applicable to the apparatus 600 and the units included therein, and are not repeated here.
The apparatus 600 provided by the above embodiment of the present disclosure can split a video based on the quality of its video frames. Since, during video shooting, a scene switch usually causes the quality of the video frames to decrease, splitting the video at the low-quality time points during the quality change allows the video to be split by scene, which in turn facilitates subsequent scene-based video storage and video retrieval. Moreover, compared with the prior-art approach of splitting a video manually, splitting a video based on the quality of its video frames can improve the efficiency of video splitting.
Reference is now made to Fig. 7, which shows a structural schematic diagram of an electronic device 700 (e.g., the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing unit (e.g., a central processing unit, a graphics processor, etc.) 701, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing unit 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 707 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 708 including, for example, a magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows the electronic device 700 with various devices, it should be understood that it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or included.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing unit 701, the above-described functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target video; generate a quality value sequence of the target video, where each element in the quality value sequence characterizes a time point on the time axis of the target video and the quality value corresponding to that time point, a quality value characterizes the quality of the video frame at that time point on the time axis, and the elements in the quality value sequence are arranged in chronological order; in response to determining that the quality value sequence includes at least two local maxima, select two adjacent local maxima from the at least two local maxima, where a quality value adjacent to a local maximum in the quality value sequence is smaller than that local maximum; determine the smallest quality value among the quality values in the quality value sequence lying between the two selected adjacent local maxima; and split the target video based on the time point corresponding to the determined smallest quality value to obtain post-split videos.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in the flowcharts or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logic function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the acquiring unit may also be described as "a unit that acquires a target video".
The above description is merely the preferred embodiments of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.