CN108012190A - Video merging method and device - Google Patents

Video merging method and device

Info

Publication number
CN108012190A
Authority
CN
China
Prior art keywords
video
combined
characteristic information
segment
track characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711282856.2A
Other languages
Chinese (zh)
Inventor
张引
刘鑫忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sohu New Media Information Technology Co Ltd
Original Assignee
Beijing Sohu New Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sohu New Media Information Technology Co Ltd filed Critical Beijing Sohu New Media Information Technology Co Ltd
Priority to CN201711282856.2A priority Critical patent/CN108012190A/en
Publication of CN108012190A publication Critical patent/CN108012190A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video merging method and device. First audio track characteristic information and first video track characteristic information are extracted from an acquired first video segment to be merged; second audio track characteristic information and second video track characteristic information of an acquired second video segment to be merged are adjusted by using the first audio track characteristic information and the first video track characteristic information; and finally the first video segment to be merged is merged with the second video segment to be merged after audio and video adjustment, so as to obtain a merged video file. By using the audio track characteristic information and the video track characteristic information of the first video segment to be merged to apply a corresponding audio and video adjustment to the second video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.

Description

Video merging method and device
Technical field
The present invention relates to the technical field of video production, and more particularly to a video merging method and device.
Background art
With the improvement of the performance of mobile terminals (such as smartphones and iPads), consumer digital cameras and camcorders are gradually being replaced by mobile terminals. As mobile terminals become widespread, people shoot photos and videos with them when they travel, and merging such videos into a keepsake of an event has increasingly become a popular way of sharing.
Existing video merging techniques simply concatenate videos, and the videos being merged often differ in their characteristics. As a result, the first half and the second half of the merged video may have different characteristics, which makes the merged video unpleasant to watch.
Summary of the invention
In view of this, the present invention discloses a video merging method and device, to solve the problems in conventional schemes that the videos to be merged often differ in their characteristics, so that the first half and the second half of the merged video may have different characteristics and the merged video is unpleasant to watch.
A video merging method includes:
obtaining a first video segment to be merged;
extracting first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
obtaining a second video segment to be merged;
adjusting second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
Preferably, the step of extracting the first audio track characteristic information and the first video track characteristic information from the first video segment to be merged specifically includes:
extracting each piece of sound amplitude data of the first video segment to be merged, and calculating a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
performing statistics on each frame image of the first video segment to be merged according to color features, and calculating a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features include hue, saturation and brightness.
Preferably, the step of adjusting the second audio track characteristic information and the second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information specifically includes:
adjusting the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
processing the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
adding each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and performing corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment.
Preferably, the step of merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment to obtain a merged video specifically includes:
merging the first audio track characteristic information of the first video segment to be merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
merging the first video track characteristic information of the first video segment to be merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
merging the merged audio stream and the merged video stream to generate the merged video file.
Preferably, the method further includes:
saving the merged video file.
A video merging device includes:
a first acquisition unit, configured to obtain a first video segment to be merged;
a video feature extraction unit, configured to extract first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
a second acquisition unit, configured to obtain a second video segment to be merged;
a video feature adjustment unit, configured to adjust second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
a video feature merging unit, configured to merge the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
Preferably, the video feature extraction unit includes:
a feature extraction subunit, configured to extract each piece of sound amplitude data of the first video segment to be merged and calculate a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
a statistics and calculation subunit, configured to perform statistics on each frame image of the first video segment to be merged according to color features and calculate a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features include hue, saturation and brightness.
Preferably, the video feature adjustment unit includes:
a feature adjustment subunit, configured to adjust the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
a filter generation subunit, configured to process the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
a color processing subunit, configured to add each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and perform corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment.
Preferably, the video feature merging unit includes:
an audio stream synthesis subunit, configured to merge the first audio track characteristic information of the first video segment to be merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
a video stream synthesis subunit, configured to merge the first video track characteristic information of the first video segment to be merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
a video file synthesis subunit, configured to merge the merged audio stream and the merged video stream to generate the merged video file.
Preferably, the device further includes:
a storage unit, configured to save the merged video file.
It can be seen from the above technical solutions that the invention discloses a video merging method and device. First audio track characteristic information and first video track characteristic information are extracted from an acquired first video segment to be merged; second audio track characteristic information and second video track characteristic information of an acquired second video segment to be merged are adjusted by using the first audio track characteristic information and the first video track characteristic information; and finally the first video segment to be merged is merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file. By applying a corresponding audio and video adjustment to the second video segment to be merged based on the audio track characteristic information and the video track characteristic information of the first video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from the disclosed drawings without creative effort.
Fig. 1 is a flowchart of a video merging method disclosed in an embodiment of the present invention;
Fig. 2 is a flowchart of another video merging method disclosed in an embodiment of the present invention;
Fig. 3 is a flowchart of yet another video merging method disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a video merging device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another video merging device disclosed in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention disclose a video merging method and device, to solve the problems in conventional schemes that the videos to be merged often differ in their characteristics, so that the first half and the second half of the merged video may have different characteristics and the merged video is unpleasant to watch.
Referring to Fig. 1, a flowchart of a video merging method disclosed in an embodiment of the present invention, the method is applied to a mobile terminal such as a smartphone or an iPad, and includes the following steps:
Step S101: obtaining a first video segment to be merged;
Specifically, in practical applications, the first video segment to be merged is loaded on the mobile terminal.
Step S102: extracting first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
Specifically, the first video segment to be merged obtained in step S101 is decoded by the decoder of the mobile terminal to obtain corresponding video byte stream data, and the first audio track characteristic information and the first video track characteristic information are extracted from the video byte stream data.
The first audio track characteristic information includes sound amplitude data.
The first video track characteristic information includes, but is not limited to, hue, saturation and brightness.
Step S103: obtaining a second video segment to be merged;
Specifically, in practical applications, the second video segment to be merged is loaded on the mobile terminal.
Step S104: adjusting second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
Specifically, taking the first audio track characteristic information and the first video track characteristic information of the first video segment to be merged as a reference, the second audio track characteristic information of the second video segment to be merged is adjusted to have the same characteristic information as the first audio track characteristic information, and the second video track characteristic information is adjusted to have the same characteristic information as the first video track characteristic information, so that the first video segment to be merged and the second video segment to be merged have the same audio and video characteristics.
It should be noted that, before the audio and video adjustment is performed on the second video segment to be merged, the second video segment to be merged needs to be decoded by the decoder of the mobile terminal to obtain corresponding video byte stream data, and the audio and video adjustment is then performed on the video byte stream data based on the first audio track characteristic information and the first video track characteristic information.
Step S105: merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
Specifically, the first audio track characteristic information of the first video segment to be merged is merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
the first video track characteristic information of the first video segment to be merged is merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
the merged audio stream and the merged video stream are merged to generate the merged video file.
It should be noted that merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment first produces merged video stream data; the merged video stream data is then sent to the encoder of the mobile terminal for encoding, so as to form the merged video file.
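The patent describes this merge only at the level of streams and the terminal's encoder. As one illustrative way the container-level concatenation could be done on Android, the sketch below remuxes the samples of both clips into one MP4 with MediaExtractor and MediaMuxer. It assumes both segments reach this point encoded with identical audio and video codec parameters (for example because the adjusted data is passed through the terminal's encoder with the same settings), and all class names are assumptions made for the example.

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Illustrative sketch: append the samples of two already-compatible clips into one MP4 file.
    public final class ClipConcatenator {

        public static void concat(String firstClip, String secondClip, String outputPath)
                throws IOException {
            MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

            // Register one audio and one video output track using the first clip's formats.
            MediaExtractor probe = new MediaExtractor();
            probe.setDataSource(firstClip);
            int outAudio = -1, outVideo = -1;
            for (int i = 0; i < probe.getTrackCount(); i++) {
                MediaFormat f = probe.getTrackFormat(i);
                String mime = f.getString(MediaFormat.KEY_MIME);
                if (mime.startsWith("audio/")) outAudio = muxer.addTrack(f);
                if (mime.startsWith("video/")) outVideo = muxer.addTrack(f);
            }
            probe.release();
            muxer.start();

            long offsetUs = copyClip(firstClip, muxer, outAudio, outVideo, 0L);
            copyClip(secondClip, muxer, outAudio, outVideo, offsetUs);

            muxer.stop();
            muxer.release();
        }

        // Copy every audio/video sample of one clip into the muxer, shifted by offsetUs;
        // returns the timestamp offset at which the next clip should start.
        private static long copyClip(String clip, MediaMuxer muxer, int outAudio, int outVideo,
                                     long offsetUs) throws IOException {
            MediaExtractor extractor = new MediaExtractor();
            extractor.setDataSource(clip);
            int inAudio = -1, inVideo = -1;
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                String mime = extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME);
                if (mime.startsWith("audio/")) inAudio = i;
                if (mime.startsWith("video/")) inVideo = i;
                extractor.selectTrack(i);
            }
            ByteBuffer buffer = ByteBuffer.allocate(2 << 20);   // 2 MiB, enough for typical key frames
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            long lastUs = offsetUs;
            while (true) {
                int size = extractor.readSampleData(buffer, 0);
                if (size < 0) break;
                int track = extractor.getSampleTrackIndex();
                if (track == inAudio || track == inVideo) {
                    info.offset = 0;
                    info.size = size;
                    info.presentationTimeUs = extractor.getSampleTime() + offsetUs;
                    info.flags = extractor.getSampleFlags();    // sync-sample flag maps onto the key-frame flag
                    lastUs = Math.max(lastUs, info.presentationTimeUs);
                    muxer.writeSampleData(track == inAudio ? outAudio : outVideo, buffer, info);
                }
                extractor.advance();
            }
            extractor.release();
            return lastUs + 33_333;   // leave roughly one frame of headroom before the next clip
        }
    }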
It should be noted that, when more than two segments are merged, one of the video segments to be merged is arbitrarily chosen as the reference video, and the audio track characteristic information and the video track characteristic information of the reference video are extracted; the audio track characteristic information and the video track characteristic information of the reference video are then used as the audio and video adjustment reference for the remaining video segments to be merged, and the audio and video of each remaining segment are adjusted accordingly; finally, the reference video and each adjusted video segment are merged to obtain the merged video file.
In summary, by using the audio track characteristic information and the video track characteristic information of the first video segment to be merged to apply a corresponding audio and video adjustment to the second video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.
In order to further optimize the above embodiment, referring to Fig. 2, a flowchart of a video merging method disclosed in another embodiment of the present invention, the method includes the following steps:
Step S201: obtaining a first video segment to be merged;
Step S202: extracting each piece of sound amplitude data of the first video segment to be merged, and calculating a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
It should be noted that, after the sound amplitude average value is calculated, it also needs to be saved, to provide the basis for subsequently adjusting the audio track characteristic information of the second video segment to be merged.
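As a minimal illustration of this step (assuming 16-bit PCM samples obtained from the decoder; the class name is an assumption), the sound amplitude average value can be computed as the mean absolute sample value:

    // Illustrative sketch: average absolute amplitude of decoded 16-bit PCM audio.
    public final class AmplitudeStats {

        public static double averageAmplitude(short[] pcmSamples) {
            if (pcmSamples == null || pcmSamples.length == 0) {
                return 0.0;
            }
            long sum = 0;
            for (short sample : pcmSamples) {
                sum += Math.abs(sample);   // amplitude of one sample
            }
            return (double) sum / pcmSamples.length;
        }
    }

The returned value is what gets saved as the first audio track characteristic information.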
Step S203: performing statistics on each frame image of the first video segment to be merged according to color features, and calculating a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features include, but are not limited to, hue, saturation and brightness;
The color feature average value includes, but is not limited to, a hue average value, a saturation average value and a brightness average value.
After the color feature average value of each frame image is calculated, it also needs to be saved, to provide the basis for subsequently adjusting the video track characteristic information of the second video segment to be merged.
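A minimal sketch of this step is shown below, assuming the frames have already been decoded to Bitmaps; in practice one would sample pixels and frames rather than visit every pixel, and a circular mean would be more appropriate for hue. The class name is an assumption.

    import android.graphics.Bitmap;
    import android.graphics.Color;

    // Illustrative sketch: average hue, saturation and brightness over a clip's frames.
    public final class ColorStats {

        public static float[] averageHsv(Iterable<Bitmap> frames) {
            double hue = 0, sat = 0, val = 0;
            long pixels = 0;
            float[] hsv = new float[3];
            for (Bitmap frame : frames) {
                for (int y = 0; y < frame.getHeight(); y++) {
                    for (int x = 0; x < frame.getWidth(); x++) {
                        Color.colorToHSV(frame.getPixel(x, y), hsv);
                        hue += hsv[0];   // hue in degrees, 0..360
                        sat += hsv[1];   // saturation, 0..1
                        val += hsv[2];   // brightness (value), 0..1
                        pixels++;
                    }
                }
            }
            if (pixels == 0) {
                return new float[3];
            }
            return new float[]{(float) (hue / pixels), (float) (sat / pixels), (float) (val / pixels)};
        }
    }

The three averages are what get saved as the first video track characteristic information.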
In practical applications, the execution order of step S202 and step S203 is not limited to the order shown in this embodiment; step S203 may also be performed before step S202, or step S202 and step S203 may be performed at the same time.
It should be noted that step S202 and step S203 in this embodiment are the specific implementation of step S102 in the embodiment shown in Fig. 1.
Step S204: adjusting the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
In this embodiment, adjusting the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged enables the first video segment to be merged and the second video segment to be merged to have the same audio characteristic information.
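The patent does not prescribe a concrete adjustment formula. One simple reading, reusing the averageAmplitude helper sketched above, is to apply a linear gain that brings the second segment's average amplitude to the saved target, with clipping to the 16-bit range; this mapping is an assumption, not part of the disclosure.

    // Illustrative sketch: scale the second clip's PCM so its average amplitude matches the target.
    public final class AmplitudeAdjuster {

        public static short[] matchAverageAmplitude(short[] pcmSamples, double targetAverage) {
            double currentAverage = AmplitudeStats.averageAmplitude(pcmSamples);
            if (currentAverage == 0.0) {
                return pcmSamples;                          // silent clip: nothing to scale
            }
            double gain = targetAverage / currentAverage;   // bring the average up or down to the target
            short[] adjusted = new short[pcmSamples.length];
            for (int i = 0; i < pcmSamples.length; i++) {
                int scaled = (int) Math.round(pcmSamples[i] * gain);
                adjusted[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
            }
            return adjusted;
        }
    }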
Step S205: processing the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
It should be noted that GPUImage is an open-source framework for building filters. The mobile terminal can process the color feature average value of the first video segment to be merged with GPUImage to obtain each color parameter filter. The color parameter filters include, but are not limited to, a hue filter, a saturation filter, a brightness filter, an exposure filter and the like.
Specifically, on mobile platforms (Android and iOS), each color parameter filter can be generated with GPUImage; for example, brightness can be adjusted with a GPUImageBrightnessFilter, exposure can be adjusted with a GPUImageExposureFilter, and the color distribution can be adjusted with a GPUImageFalseColorFilter.
GPUImage provides a rich set of filter types. As long as the color parameters are obtained from the first video segment to be merged, the corresponding color parameter filters can be generated with GPUImage, and the filters can then be chained with GPUImage to adjust the second video segment to be merged accordingly.
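As an illustration of building such a chain with the open-source Android port of GPUImage (package jp.co.cyberagent.android.gpuimage; the exact filter package path varies between library versions), the sketch below derives a hue, a saturation and a brightness filter from the saved HSV averages of the first segment and the measured HSV averages of the second segment. How the averaged color values map onto filter parameters is not specified by the patent, so the mappings here are assumptions.

    import java.util.Arrays;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageBrightnessFilter;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageFilter;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageFilterGroup;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageHueFilter;
    import jp.co.cyberagent.android.gpuimage.filter.GPUImageSaturationFilter;

    // Illustrative sketch: build a color parameter filter chain from the two clips' HSV averages.
    public final class FilterChainBuilder {

        public static GPUImageFilterGroup build(float[] firstHsv, float[] secondHsv) {
            // Rotate hue by the difference between the two averages (degrees).
            GPUImageFilter hue = new GPUImageHueFilter(firstHsv[0] - secondHsv[0]);
            // Scale saturation by the ratio of the averages (1.0 leaves it unchanged), clamped to 0..2.
            float satRatio = secondHsv[1] == 0f ? 1f : firstHsv[1] / secondHsv[1];
            GPUImageFilter saturation = new GPUImageSaturationFilter(Math.min(2f, Math.max(0f, satRatio)));
            // Shift brightness by the difference of the value averages (filter expects an offset in -1..1).
            GPUImageFilter brightness = new GPUImageBrightnessFilter(firstHsv[2] - secondHsv[2]);
            return new GPUImageFilterGroup(Arrays.asList(hue, saturation, brightness));
        }
    }

The resulting filter group would then be applied to each decoded frame of the second segment (for example via GPUImage#setFilter and GPUImage#getBitmapWithFilterApplied) before the frames are re-encoded, which corresponds to the color processing described in step S206 below.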
Step S206: adding each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and performing corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment;
It should be noted that the process of adjusting the second audio track characteristic information and the second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information is not limited to the process shown in steps S204 to S206 of this embodiment; the second video track characteristic information of the second video segment to be merged may also be adjusted first by using the first video track characteristic information, and the audio characteristics of the video-adjusted second video segment to be merged may then be adjusted by using the first audio track characteristic information. The order in which the second audio track characteristic information and the second video track characteristic information of the second video segment to be merged are adjusted depends on actual needs, and is not limited by the present invention.
Steps S204 to S206 are the specific implementation of step S104 in the embodiment shown in Fig. 1.
Step S207: merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
In summary, by using the audio track characteristic information and the video track characteristic information of the first video segment to be merged to apply a corresponding audio and video adjustment to the second video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.
In addition, the video merging method based on video features implemented by the present invention can merge videos on the mobile terminal itself without additional software or hardware. Moreover, the whole technical solution is easy to implement and achieves good results on both mobile terminals and PCs.
It can be understood that, to facilitate subsequent viewing of the merged video file, the merged video file also needs to be saved.
Referring to Fig. 3, a flowchart of a video merging method disclosed in yet another embodiment of the present invention, the method is applied to a mobile terminal such as a smartphone or an iPad, and includes the following steps:
Step S301: obtaining a first video segment to be merged;
Step S302: extracting first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
The first audio track characteristic information includes sound amplitude data.
The first video track characteristic information includes, but is not limited to, the hue, saturation and brightness of the images.
Step S303: obtaining a second video segment to be merged;
Step S304: adjusting second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
Step S305: merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file;
Step S306: saving the merged video file.
It should be noted that, in this embodiment, the specific operating principles of steps S301 to S305 can be found in the corresponding parts of the above embodiments, and are not repeated here.
Corresponding to the above method embodiments, the invention further discloses a video merging device.
Referring to Fig. 4, a schematic structural diagram of a video merging device disclosed in an embodiment of the present invention, the device is applied to a mobile terminal such as a smartphone or an iPad, and includes:
A first acquisition unit 401, configured to obtain a first video segment to be merged;
Specifically, in practical applications, the first video segment to be merged is loaded on the mobile terminal.
A video feature extraction unit 402, configured to extract first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
Specifically, the first video segment to be merged obtained by the first acquisition unit 401 is decoded by the decoder of the mobile terminal to obtain corresponding video byte stream data, and the first audio track characteristic information and the first video track characteristic information are extracted from the video byte stream data.
The first audio track characteristic information includes sound amplitude data.
The first video track characteristic information includes, but is not limited to, hue, saturation and brightness.
A second acquisition unit 403, configured to obtain a second video segment to be merged;
Specifically, in practical applications, the second video segment to be merged is loaded on the mobile terminal.
A video feature adjustment unit 404, configured to adjust second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
Specifically, taking the first audio track characteristic information and the first video track characteristic information of the first video segment to be merged as a reference, the second audio track characteristic information of the second video segment to be merged is adjusted to have the same characteristic information as the first audio track characteristic information, and the second video track characteristic information is adjusted to have the same characteristic information as the first video track characteristic information, so that the first video segment to be merged and the second video segment to be merged have the same audio and video characteristics.
It should be noted that, before the audio and video adjustment is performed on the second video segment to be merged, the second video segment to be merged needs to be decoded by the decoder of the mobile terminal to obtain corresponding video byte stream data, and the audio and video adjustment is then performed on the video byte stream data based on the first audio track characteristic information and the first video track characteristic information.
A video feature merging unit 405, configured to merge the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
The video feature merging unit 405 specifically includes:
an audio stream synthesis subunit, configured to merge the first audio track characteristic information of the first video segment to be merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
a video stream synthesis subunit, configured to merge the first video track characteristic information of the first video segment to be merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
a video file synthesis subunit, configured to merge the merged audio stream and the merged video stream to generate the merged video file.
It should be noted that merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment first produces merged video stream data; the merged video stream data is then sent to the encoder of the mobile terminal for encoding, so as to form the merged video file.
It should be noted that, when more than two segments are merged using the embodiment shown in Fig. 4, one of the video segments to be merged is arbitrarily chosen as the reference video, and the audio track characteristic information and the video track characteristic information of the reference video are extracted; the audio track characteristic information and the video track characteristic information of the reference video are then used as the audio and video adjustment reference for the remaining video segments to be merged, and the audio and video of each remaining segment are adjusted accordingly; finally, the reference video and each adjusted video segment are merged to obtain the merged video file.
In summary, by using the audio track characteristic information and the video track characteristic information of the first video segment to be merged to apply a corresponding audio and video adjustment to the second video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.
In order to further optimize the above embodiment, referring to Fig. 5, a schematic structural diagram of a video merging device disclosed in another embodiment of the present invention, the device includes:
A first acquisition unit 501, configured to obtain a first video segment to be merged;
A feature extraction subunit 502, configured to extract each piece of sound amplitude data of the first video segment to be merged and calculate a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
It should be noted that, after the sound amplitude average value is calculated, it also needs to be saved, to provide the basis for subsequently adjusting the audio track characteristic information of the second video segment to be merged.
A statistics and calculation subunit 503, configured to perform statistics on each frame image of the first video segment to be merged according to color features and calculate a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features include, but are not limited to, hue, saturation and brightness;
The color feature average value includes, but is not limited to, a hue average value, a saturation average value and a brightness average value.
After the color feature average value of each frame image is calculated, it also needs to be saved, to provide the basis for subsequently adjusting the video track characteristic information of the second video segment to be merged.
In practical applications, the execution order of the feature extraction subunit 502 and the statistics and calculation subunit 503 includes, but is not limited to, the order shown in this embodiment; the statistics and calculation subunit 503 may also be executed before the feature extraction subunit 502, or the feature extraction subunit 502 and the statistics and calculation subunit 503 may be executed at the same time.
The feature extraction subunit 502 and the statistics and calculation subunit 503 constitute the specific composition of the video feature extraction unit 402.
A feature adjustment subunit 504, configured to adjust the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
In this embodiment, adjusting the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged enables the first video segment to be merged and the second video segment to be merged to have the same audio characteristic information.
A filter generation subunit 505, configured to process the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
It should be noted that GPUImage is an open-source framework for building filters. The mobile terminal can process the color feature average value of the first video segment to be merged with GPUImage to obtain each color parameter filter. The color parameter filters include, but are not limited to, a hue filter, a saturation filter, a brightness filter, an exposure filter and the like.
A color processing subunit 506, configured to add each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and perform corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment;
It should be noted that the process of adjusting the second audio track characteristic information and the second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information is not limited to the execution order of the feature adjustment subunit 504, the filter generation subunit 505 and the color processing subunit 506 in this embodiment; other execution orders may also be used in practical applications, for which reference may be made to the corresponding parts of the method embodiments, and details are not repeated here.
The feature adjustment subunit 504, the filter generation subunit 505 and the color processing subunit 506 constitute the specific composition of the video feature adjustment unit 404.
A video feature merging unit 507, configured to merge the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
In summary, by using the audio track characteristic information and the video track characteristic information of the first video segment to be merged to apply a corresponding audio and video adjustment to the second video segment to be merged, the present invention gives two segments with different characteristics the same audio and video characteristics before merging, so that the first half and the second half of the merged video have consistent audio and video characteristics, thereby improving the viewing quality of the merged video.
In addition, the video merging method based on video features implemented by the present invention can merge videos on the mobile terminal itself without additional software or hardware. Moreover, the whole technical solution is easy to implement and achieves good results on both mobile terminals and PCs.
In order to further optimize the above embodiments, on the basis of the above embodiments, the video merging device further includes:
A storage unit, configured to save the merged video file, so that the user can subsequently watch the merged video file.
Finally, it should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts of the embodiments, reference may be made to one another.
The above description of the disclosed embodiments enables a person skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A video merging method, characterized by comprising:
    obtaining a first video segment to be merged;
    extracting first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
    obtaining a second video segment to be merged;
    adjusting second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
    merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
  2. The video merging method according to claim 1, characterized in that the step of extracting the first audio track characteristic information and the first video track characteristic information from the first video segment to be merged specifically comprises:
    extracting each piece of sound amplitude data of the first video segment to be merged, and calculating a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
    performing statistics on each frame image of the first video segment to be merged according to color features, and calculating a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features comprise hue, saturation and brightness.
  3. The video merging method according to claim 2, characterized in that the step of adjusting the second audio track characteristic information and the second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information specifically comprises:
    adjusting the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
    processing the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
    adding each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and performing corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment.
  4. The video merging method according to claim 1, characterized in that the step of merging the first video segment to be merged with the second video segment to be merged after audio and video adjustment to obtain a merged video specifically comprises:
    merging the first audio track characteristic information of the first video segment to be merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
    merging the first video track characteristic information of the first video segment to be merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
    merging the merged audio stream and the merged video stream to generate the merged video file.
  5. The video merging method according to claim 1, characterized by further comprising:
    saving the merged video file.
  6. A video merging device, characterized by comprising:
    a first acquisition unit, configured to obtain a first video segment to be merged;
    a video feature extraction unit, configured to extract first audio track characteristic information and first video track characteristic information from the first video segment to be merged;
    a second acquisition unit, configured to obtain a second video segment to be merged;
    a video feature adjustment unit, configured to adjust second audio track characteristic information and second video track characteristic information of the second video segment to be merged by using the first audio track characteristic information and the first video track characteristic information;
    a video feature merging unit, configured to merge the first video segment to be merged with the second video segment to be merged after audio and video adjustment, to obtain a merged video file.
  7. The video merging device according to claim 6, characterized in that the video feature extraction unit comprises:
    a feature extraction subunit, configured to extract each piece of sound amplitude data of the first video segment to be merged and calculate a sound amplitude average value, the sound amplitude average value being the first audio track characteristic information;
    a statistics and calculation subunit, configured to perform statistics on each frame image of the first video segment to be merged according to color features and calculate a color feature average value of each frame image, the color feature average value being the first video track characteristic information, wherein the color features comprise hue, saturation and brightness.
  8. The video merging device according to claim 7, characterized in that the video feature adjustment unit comprises:
    a feature adjustment subunit, configured to adjust the second audio track characteristic information of the second video segment to be merged by using the sound amplitude average value of the first video segment to be merged;
    a filter generation subunit, configured to process the color feature average value of the first video segment to be merged with GPUImage to generate each color parameter filter;
    a color processing subunit, configured to add each color parameter filter in turn to the second video segment to be merged whose audio track characteristic information has been adjusted, and perform corresponding color processing on the second video track characteristic information of the second video segment to be merged, to form the second video segment to be merged after audio and video adjustment.
  9. The video merging device according to claim 6, characterized in that the video feature merging unit comprises:
    an audio stream synthesis subunit, configured to merge the first audio track characteristic information of the first video segment to be merged with the audio track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged audio stream;
    a video stream synthesis subunit, configured to merge the first video track characteristic information of the first video segment to be merged with the video track characteristic information of the second video segment to be merged after audio and video adjustment, to obtain a merged video stream;
    a video file synthesis subunit, configured to merge the merged audio stream and the merged video stream to generate the merged video file.
  10. The video merging device according to claim 6, characterized by further comprising:
    a storage unit, configured to save the merged video file.
CN201711282856.2A 2017-12-07 2017-12-07 A kind of video merging method and device Pending CN108012190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711282856.2A CN108012190A (en) 2017-12-07 2017-12-07 A kind of video merging method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711282856.2A CN108012190A (en) 2017-12-07 2017-12-07 A kind of video merging method and device

Publications (1)

Publication Number Publication Date
CN108012190A true CN108012190A (en) 2018-05-08

Family

ID=62057345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711282856.2A Pending CN108012190A (en) 2017-12-07 2017-12-07 A kind of video merging method and device

Country Status (1)

Country Link
CN (1) CN108012190A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1942970A (en) * 2004-04-15 2007-04-04 皇家飞利浦电子股份有限公司 Method of generating a content item having a specific emotional influence on a user
CN104584618A (en) * 2012-07-20 2015-04-29 谷歌公司 MOB source phone video collaboration
CN103916607A (en) * 2014-03-25 2014-07-09 厦门美图之家科技有限公司 Method for processing multiple videos
CN105611404A (en) * 2015-12-31 2016-05-25 北京东方云图科技有限公司 Method and device for automatically adjusting audio volume according to video application scenes
CN105681891A (en) * 2016-01-28 2016-06-15 杭州秀娱科技有限公司 Mobile terminal used method for embedding user video in scene
US20170278546A1 (en) * 2016-03-25 2017-09-28 Samsung Electronics Co., Ltd. Method and device for processing multimedia information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宗小忠, 徐光宏: "Multimedia Technology and Applications" (《多媒体技术与应用》), 30 June 2012 *
迅连科技 (CyberLink): "PowerDirector - Color Match & Color Lookup Table" (威力导演-色彩配对&色彩查找表), 《HTTPS://V.YOUKU.COM/V_SHOW/ID_XMZAWNZI3OTAWOA==.HTML?SPM=A2H0K.11417342.SORESULTS.DTITLE》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194886A (en) * 2018-08-31 2019-01-11 小黄鹿(杭州)科技有限公司 A kind of multifunctional entertainment interactive terminal
WO2020103548A1 (en) * 2018-11-21 2020-05-28 北京达佳互联信息技术有限公司 Video synthesis method and device, and terminal and storage medium
US11551726B2 (en) 2018-11-21 2023-01-10 Beijing Dajia Internet Information Technology Co., Ltd. Video synthesis method terminal and computer storage medium
CN110855905A (en) * 2019-11-29 2020-02-28 联想(北京)有限公司 Video processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN104796767B (en) A kind of cloud video editing method and system
CN103391414B (en) A kind of video process apparatus and processing method for being applied to cell phone platform
CN104902282B (en) The processing method and processing device of embedded watermark picture in video frame
CN108012190A (en) A kind of video merging method and device
KR100785013B1 (en) Methods and apparatuses for generating and recovering 3D compression data
CN107613357A (en) Sound picture Synchronous fluorimetry method, apparatus and readable storage medium storing program for executing
CN103369289A (en) Communication method of video simulation image and device
CN110809173B (en) Virtual live broadcast method and system based on AR augmented reality of smart phone
CN104717509B (en) A kind of video encoding/decoding method and device
CN105227864A (en) A kind of picture generates animation and splices with video segment the video editing method synthesized
CN105704559A (en) Poster generation method and apparatus thereof
CN104486558A (en) Video processing method and device for simulating shooting scene
CN103297729A (en) Video processing method and device
CN109040773A (en) A kind of video improvement method, apparatus, equipment and medium
CN106792155A (en) A kind of method and device of the net cast of multiple video strems
CN106961629A (en) A kind of video encoding/decoding method and device
CN110012336A (en) Picture configuration method, terminal and the device at interface is broadcast live
CN114598919A (en) Video processing method, video processing device, computer equipment and storage medium
CN106604047A (en) Multi-video-stream video direct broadcasting method and device
CN108337529A (en) A kind of exchange method and live streaming client of the net cast median surface based on ios systems
CN103974062B (en) Image display device, image display system and method for displaying image
CN107396200A (en) The method that net cast is carried out based on social software
CN107231578A (en) The system and method that video file is quickly played
CN110493604A (en) A method of 8K HEVC real-time coding is realized based on GPU cluster
ATE319127T1 (en) DATA COMPRESSION THROUGH OFFSET REPRESENTATION

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180508